Re: [Intel-gfx] [PATCH 47/47] drm/i915/guc: Unblock GuC submission on Gen11+

2021-07-03 Thread Martin Peres

On 02/07/2021 18:07, Michal Wajdeczko wrote:



On 02.07.2021 10:09, Martin Peres wrote:

On 02/07/2021 10:29, Pekka Paalanen wrote:

On Thu, 1 Jul 2021 21:28:06 +0200
Daniel Vetter  wrote:


On Thu, Jul 1, 2021 at 8:27 PM Martin Peres 
wrote:


On 01/07/2021 11:14, Pekka Paalanen wrote:

On Wed, 30 Jun 2021 11:58:25 -0700
John Harrison  wrote:
  

On 6/30/2021 01:22, Martin Peres wrote:

On 24/06/2021 10:05, Matthew Brost wrote:

From: Daniele Ceraolo Spurio 

Unblock GuC submission on Gen11+ platforms.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Daniele Ceraolo Spurio

Signed-off-by: Matthew Brost 
---
     drivers/gpu/drm/i915/gt/uc/intel_guc.h    |  1 +
     drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c |  8 
     drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h |  3 +--
     drivers/gpu/drm/i915/gt/uc/intel_uc.c | 14 +-
     4 files changed, 19 insertions(+), 7 deletions(-)
   


...
  

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 7a69c3c027e9..61be0aa81492 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -34,8 +34,15 @@ static void uc_expand_default_options(struct intel_uc *uc)
     return;
     }
     -    /* Default: enable HuC authentication only */
-    i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+    /* Intermediate platforms are HuC authentication only */
+    if (IS_DG1(i915) || IS_ALDERLAKE_S(i915)) {
+    drm_dbg(&i915->drm, "Disabling GuC only due to old platform\n");


This comment does not seem accurate, given that DG1 is barely
out, and
ADL is not out yet. How about:

"Disabling GuC on untested platforms"?
   

Just because something is not in the shops yet does not mean it is
new.
Technology is always obsolete by the time it goes on sale.


That is a very good reason to not use terminology like "new", "old",
"current", "modern" etc. at all.

End users like me definitely do not share your interpretation of
"old".


Yep, old and new is relative. In the end, what matters is the
validation
effort, which is why I was proposing "untested platforms".

Also, remember that you are not writing these messages for Intel
engineers, but instead are writing for Linux *users*.


It's drm_dbg. Users don't read this stuff, at least not users with no
clue what the driver does and stuff like that.


If I had a problem, I would read it, and I have no clue what anything
of that is.


Exactly.

This level of defense for what is clearly a bad *debug* message (at the
very least, the grammar) makes no sense at all!

I don't want to hear arguments like "Not my patch" from a developer
literally sending the patch to the ML and who added his SoB to the
patch, playing with words, or minimizing the problem of having such a
message.


Agree that 'not my patch' is never a good excuse, but equally we can't
blame the original patch author, as the patch was updated a few times since then.


I never wanted to blame the author here, I was only speaking about the 
handling of feedback on the patch.




Maybe to avoid confusion and simplify reviews, we could split this
patch into two smaller ones: a first one that really unblocks GuC submission on
all Gen11+ (see __guc_submission_supported) and a second one that updates
defaults for Gen12+ (see uc_expand_default_options), as the original patch
(from ~2019) has evolved beyond what the title/commit message says.


Both work for me, as long as it is a collaborative effort.

Cheers,
Martin



Then we can fix all messaging and make sure it's clear and understood.

Thanks,
Michal



All of the above are just clear signals for the community to get off
your playground, which is frankly unacceptable. Your email address does
not matter.

In the spirit of collaboration, your response should have been "Good
catch, how about  or ?". This would not have wasted everyone's
time in an attempt to just have it your way.

My level of confidence in this GuC transition was already low, but you
guys are working hard to shoot yourself in the foot. Trust should be
earned!

Martin




Thanks,
pq


___
Intel-gfx mailing list
intel-...@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [PATCH 47/47] drm/i915/guc: Unblock GuC submission on Gen11+

2021-07-02 Thread Martin Peres

On 02/07/2021 16:06, Michal Wajdeczko wrote:



On 02.07.2021 10:13, Martin Peres wrote:

On 01/07/2021 21:24, Martin Peres wrote:
[...]





+    i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+    return;
+    }
+
+    /* Default: enable HuC authentication and GuC submission */
+    i915->params.enable_guc = ENABLE_GUC_LOAD_HUC |
ENABLE_GUC_SUBMISSION;


This seems to be in contradiction with the GuC submission plan which
states:

"Not enabled by default on any current platforms but can be enabled via
modparam enable_guc".



I don't believe any current platform gets this point where GuC
submission would be enabled by default. The first would be ADL-P which
isn't out yet.


Isn't that exactly what the line above does?


In case you missed this crucial part of the review: please answer the
above question.


I guess there is some misunderstanding here, and I must admit I had
similar doubt, but if you look beyond patch diff and check function code
you will find that the very condition is:

/* Don't enable GuC/HuC on pre-Gen12 */
if (GRAPHICS_VER(i915) < 12) {
i915->params.enable_guc = 0;
return;
}

so all pre-Gen12 platforms will continue to have GuC/HuC disabled.


Thanks Michal, but then the problem is the other way: how can one enable 
it on gen11?


I like what Daniele was going for here: separating the capability from 
the user-requested value, but it seems the patch stopped halfway. 
How about never touching the parameter, and having an AND between the two 
values to get the effective enable_guc?


Right now, the code is really confusing :s

Thanks,
Martin



Thanks,
Michal



Re: [PATCH 47/47] drm/i915/guc: Unblock GuC submission on Gen11+

2021-07-02 Thread Martin Peres

On 01/07/2021 21:24, Martin Peres wrote:
[...]





+    i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+    return;
+    }
+
+    /* Default: enable HuC authentication and GuC submission */
+    i915->params.enable_guc = ENABLE_GUC_LOAD_HUC | 
ENABLE_GUC_SUBMISSION;


This seems to be in contradiction with the GuC submission plan which 
states:


"Not enabled by default on any current platforms but can be enabled via
modparam enable_guc".



I don't believe any current platform gets this point where GuC
submission would be enabled by default. The first would be ADL-P which
isn't out yet.


Isn't that exactly what the line above does?


In case you missed this crucial part of the review: please answer the 
above question.


Cheers,
Martin


Re: [Intel-gfx] [PATCH 47/47] drm/i915/guc: Unblock GuC submission on Gen11+

2021-07-02 Thread Martin Peres

On 02/07/2021 10:29, Pekka Paalanen wrote:

On Thu, 1 Jul 2021 21:28:06 +0200
Daniel Vetter  wrote:


On Thu, Jul 1, 2021 at 8:27 PM Martin Peres  wrote:


On 01/07/2021 11:14, Pekka Paalanen wrote:

On Wed, 30 Jun 2021 11:58:25 -0700
John Harrison  wrote:
  

On 6/30/2021 01:22, Martin Peres wrote:

On 24/06/2021 10:05, Matthew Brost wrote:

From: Daniele Ceraolo Spurio 

Unblock GuC submission on Gen11+ platforms.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
---
drivers/gpu/drm/i915/gt/uc/intel_guc.h|  1 +
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c |  8 
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h |  3 +--
drivers/gpu/drm/i915/gt/uc/intel_uc.c | 14 +-
4 files changed, 19 insertions(+), 7 deletions(-)
  


...
  

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 7a69c3c027e9..61be0aa81492 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -34,8 +34,15 @@ static void uc_expand_default_options(struct intel_uc *uc)
return;
}
-/* Default: enable HuC authentication only */
-i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+/* Intermediate platforms are HuC authentication only */
+if (IS_DG1(i915) || IS_ALDERLAKE_S(i915)) {
+drm_dbg(&i915->drm, "Disabling GuC only due to old platform\n");


This comment does not seem accurate, given that DG1 is barely out, and
ADL is not out yet. How about:

"Disabling GuC on untested platforms"?
  

Just because something is not in the shops yet does not mean it is new.
Technology is always obsolete by the time it goes on sale.


That is a very good reason to not use terminology like "new", "old",
"current", "modern" etc. at all.

End users like me definitely do not share your interpretation of "old".


Yep, old and new is relative. In the end, what matters is the validation
effort, which is why I was proposing "untested platforms".

Also, remember that you are not writing these messages for Intel
engineers, but instead are writing for Linux *users*.


It's drm_dbg. Users don't read this stuff, at least not users with no
clue what the driver does and stuff like that.


If I had a problem, I would read it, and I have no clue what anything
of that is.


Exactly.

This level of defense for what is clearly a bad *debug* message (at the 
very least, the grammar) makes no sense at all!


I don't want to hear arguments like "Not my patch" from a developer 
literally sending the patch to the ML and who added his SoB to the 
patch, playing with words, or minimizing the problem of having such a 
message.


All of the above are just clear signals for the community to get off 
your playground, which is frankly unacceptable. Your email address does 
not matter.


In the spirit of collaboration, your response should have been "Good 
catch, how about  or ?". This would not have wasted everyone's 
time in an attempt to just have it your way.


My level of confidence in this GuC transition was already low, but you 
guys are working hard to shoot yourself in the foot. Trust should be earned!


Martin




Thanks,
pq



Re: [PATCH 47/47] drm/i915/guc: Unblock GuC submission on Gen11+

2021-07-01 Thread Martin Peres

On 01/07/2021 11:14, Pekka Paalanen wrote:

On Wed, 30 Jun 2021 11:58:25 -0700
John Harrison  wrote:


On 6/30/2021 01:22, Martin Peres wrote:

On 24/06/2021 10:05, Matthew Brost wrote:

From: Daniele Ceraolo Spurio 

Unblock GuC submission on Gen11+ platforms.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
---
   drivers/gpu/drm/i915/gt/uc/intel_guc.h    |  1 +
   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c |  8 
   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h |  3 +--
   drivers/gpu/drm/i915/gt/uc/intel_uc.c | 14 +-
   4 files changed, 19 insertions(+), 7 deletions(-)



...


diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 7a69c3c027e9..61be0aa81492 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -34,8 +34,15 @@ static void uc_expand_default_options(struct intel_uc *uc)
   return;
   }
   -    /* Default: enable HuC authentication only */
-    i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+    /* Intermediate platforms are HuC authentication only */
+    if (IS_DG1(i915) || IS_ALDERLAKE_S(i915)) {
+    drm_dbg(&i915->drm, "Disabling GuC only due to old platform\n");


This comment does not seem accurate, given that DG1 is barely out, and
ADL is not out yet. How about:

"Disabling GuC on untested platforms"?
  

Just because something is not in the shops yet does not mean it is new.
Technology is always obsolete by the time it goes on sale.


That is a very good reason to not use terminology like "new", "old",
"current", "modern" etc. at all.

End users like me definitely do not share your interpretation of "old".


Yep, old and new is relative. In the end, what matters is the validation 
effort, which is why I was proposing "untested platforms".


Also, remember that you are not writing these messages for Intel 
engineers, but instead are writing for Linux *users*.


Cheers,
Martin




Thanks,
pq



And the issue is not a lack of testing, it is a question of whether we
are allowed to change the default on something that has already started
being used by customers or not (including pre-release beta customers).
I.e. it is basically a political decision not an engineering decision.




Re: [PATCH 47/47] drm/i915/guc: Unblock GuC submission on Gen11+

2021-07-01 Thread Martin Peres

On 30/06/2021 21:00, Matthew Brost wrote:

On Wed, Jun 30, 2021 at 11:22:38AM +0300, Martin Peres wrote:



On 24/06/2021 10:05, Matthew Brost wrote:

From: Daniele Ceraolo Spurio 

Unblock GuC submission on Gen11+ platforms.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
---
   drivers/gpu/drm/i915/gt/uc/intel_guc.h|  1 +
   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c |  8 
   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h |  3 +--
   drivers/gpu/drm/i915/gt/uc/intel_uc.c | 14 +-
   4 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index fae01dc8e1b9..77981788204f 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -54,6 +54,7 @@ struct intel_guc {
struct ida guc_ids;
struct list_head guc_id_list;
+   bool submission_supported;
bool submission_selected;
struct i915_vma *ads_vma;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index a427336ce916..405339202280 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2042,6 +2042,13 @@ void intel_guc_submission_disable(struct intel_guc *guc)
/* Note: By the time we're here, GuC may have already been reset */
   }
+static bool __guc_submission_supported(struct intel_guc *guc)
+{
+   /* GuC submission is unavailable for pre-Gen11 */
+   return intel_guc_is_supported(guc) &&
+  INTEL_GEN(guc_to_gt(guc)->i915) >= 11;
+}
+
   static bool __guc_submission_selected(struct intel_guc *guc)
   {
struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
@@ -2054,6 +2061,7 @@ static bool __guc_submission_selected(struct intel_guc 
*guc)
   void intel_guc_submission_init_early(struct intel_guc *guc)
   {
+   guc->submission_supported = __guc_submission_supported(guc);
guc->submission_selected = __guc_submission_selected(guc);
   }
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
index a2a3fad72be1..be767eb6ff71 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
@@ -37,8 +37,7 @@ int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
   static inline bool intel_guc_submission_is_supported(struct intel_guc *guc)
   {
-   /* XXX: GuC submission is unavailable for now */
-   return false;
+   return guc->submission_supported;
   }
   static inline bool intel_guc_submission_is_wanted(struct intel_guc *guc)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 7a69c3c027e9..61be0aa81492 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -34,8 +34,15 @@ static void uc_expand_default_options(struct intel_uc *uc)
return;
}
-   /* Default: enable HuC authentication only */
-   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+   /* Intermediate platforms are HuC authentication only */
+   if (IS_DG1(i915) || IS_ALDERLAKE_S(i915)) {
+   drm_dbg(&i915->drm, "Disabling GuC only due to old platform\n");


This comment does not seem accurate, given that DG1 is barely out, and ADL
is not out yet. How about:

"Disabling GuC on untested platforms"?


This isn't my comment but it seems right to me. AFAIK this describes the
current PR but it is subject to change (i.e. we may enable GuC on DG1 by
default at some point).


Well, it's pretty bad PR to say that DG1 and ADL are old when they are 
not even out ;)


But seriously, fix this sentence, it makes no sense at all unless you 
are really trying to confuse non-native speakers (and annoy language 
purists too).







+   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+   return;
+   }
+
+   /* Default: enable HuC authentication and GuC submission */
+   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC | ENABLE_GUC_SUBMISSION;


This seems to be in contradiction with the GuC submission plan which states:

"Not enabled by default on any current platforms but can be enabled via
modparam enable_guc".



I don't believe any current platform gets this point where GuC
submission would be enabled by default. The first would be ADL-P which
isn't out yet.


Isn't that exactly what the line above does?




When you rework the patch, could you please add a warning when the user
force-enables the GuC Command Submission? Something like:

"WARNING: The user force-enabled the experimental GuC command submission
backend using i915.enable_guc. Please disable it if experiencing stability
issues. No bug 

Re: [PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-06-30 Thread Martin Peres

On 29/06/2021 22:35, Matthew Brost wrote:

Add entry for i915 new parallel submission uAPI plan.

v2:
  (Daniel Vetter):
   - Expand logical order explaination
   - Add dummy header
   - Only allow N BBs in execbuf IOCTL
   - Configure parallel submission per slot not per gem context
v3:
  (Marcin Ślusarz):
   - Lot's of typos / bad english fixed
  (Tvrtko Ursulin):
   - Consistent pseudo code, clean up wording in descriptions
v4:
  (Daniel Vetter)
   - Drop flags
   - Add kernel doc
   - Reword a few things / fix typos
  (Tvrtko)
   - Reword a few things / fix typos
v5:
  (Checkpatch)
   - Fix typos
  (Docs)
   - Fix warning

Cc: Tvrtko Ursulin 
Cc: Tony Ye 
CC: Carl Zhang 
Cc: Daniel Vetter 
Cc: Jason Ekstrand 
Signed-off-by: Matthew Brost 
Acked-by: Daniel Vetter 
Acked-by: Tony Ye 
---
  Documentation/gpu/rfc/i915_parallel_execbuf.h | 122 ++
  Documentation/gpu/rfc/i915_scheduler.rst  |  59 -
  2 files changed, 180 insertions(+), 1 deletion(-)
  create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h

diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h 
b/Documentation/gpu/rfc/i915_parallel_execbuf.h
new file mode 100644
index ..8cbe2c4e0172
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see 
i915_context_engines_parallel_submit */
+
+/**
+ * struct drm_i915_context_engines_parallel_submit - Configure engine for
+ * parallel submission.
+ *
+ * Setup a slot in the context engine map to allow multiple BBs to be submitted
+ * in a single execbuf IOCTL. Those BBs will then be scheduled to run on the 
GPU
+ * in parallel. Multiple hardware contexts are created internally in the i915
+ * run these BBs.


The previous sentence makes little sense.


 Once a slot is configured for N BBs only N BBs can be
+ * submitted in each execbuf IOCTL and this is implicit behavior e.g. The user
+ * doesn't tell the execbuf IOCTL there are N BBs, the execbuf IOCTL knows how
+ * many BBs there are based on the slot's configuration. The N BBs are the last
+ * N buffer objects or first N if I915_EXEC_BATCH_FIRST is set.
+ *
+ * The default placement behavior is to create implicit bonds between each
+ * context if each context maps to more than 1 physical engine (e.g. context is
+ * a virtual engine). Also we only allow contexts of same engine class and 
these
+ * contexts must be in logically contiguous order. Examples of the placement
+ * behavior described below. Lastly, the default is to not allow BBs to
+ * preempted mid BB rather insert coordinated preemption on all hardware
+ * contexts between each set of BBs. Flags may be added in the future to change
+ * both of these default behaviors.
+ *
+ * Returns -EINVAL if hardware context placement configuration is invalid or if
+ * the placement configuration isn't supported on the platform / submission
+ * interface.
+ * Returns -ENODEV if extension isn't supported on the platform / submission
+ * interface.
+ *
+ * .. code-block:: none
+ *
+ * Example 1 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=1,
+ *  engines=CS[0],CS[1])
+ *
+ * Results in the following valid placement:
+ * CS[0], CS[1]
+ *
+ * Example 2 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ *  engines=CS[0],CS[2],CS[1],CS[3])
+ *
+ * Results in the following valid placements:
+ * CS[0], CS[1]
+ * CS[2], CS[3]
+ *
+ * This can also be thought of as 2 virtual engines described by 2-D array
+ * in the engines the field with bonds placed between each index of the
+ * virtual engines. e.g. CS[0] is bonded to CS[1], CS[2] is bonded to
+ * CS[3].
+ * VE[0] = CS[0], CS[2]
+ * VE[1] = CS[1], CS[3]
+ *
+ * Example 3 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ *  engines=CS[0],CS[1],CS[1],CS[3])
+ *
+ * Results in the following valid and invalid placements:
+ * CS[0], CS[1]
+ * CS[1], CS[3] - Not logical contiguous, return -EINVAL
+ */
+struct drm_i915_context_engines_parallel_submit {
+   /**
+* @base: base user extension.
+*/
+   struct i915_user_extension base;
+
+   /**
+* @engine_index: slot for parallel engine
+*/
+   __u16 

Re: [PATCH 47/47] drm/i915/guc: Unblock GuC submission on Gen11+

2021-06-30 Thread Martin Peres




On 24/06/2021 10:05, Matthew Brost wrote:

From: Daniele Ceraolo Spurio 

Unblock GuC submission on Gen11+ platforms.

Signed-off-by: Michal Wajdeczko 
Signed-off-by: Daniele Ceraolo Spurio 
Signed-off-by: Matthew Brost 
---
  drivers/gpu/drm/i915/gt/uc/intel_guc.h|  1 +
  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c |  8 
  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h |  3 +--
  drivers/gpu/drm/i915/gt/uc/intel_uc.c | 14 +-
  4 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h 
b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index fae01dc8e1b9..77981788204f 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -54,6 +54,7 @@ struct intel_guc {
struct ida guc_ids;
struct list_head guc_id_list;
  
+	bool submission_supported;

bool submission_selected;
  
  	struct i915_vma *ads_vma;

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index a427336ce916..405339202280 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2042,6 +2042,13 @@ void intel_guc_submission_disable(struct intel_guc *guc)
/* Note: By the time we're here, GuC may have already been reset */
  }
  
+static bool __guc_submission_supported(struct intel_guc *guc)

+{
+   /* GuC submission is unavailable for pre-Gen11 */
+   return intel_guc_is_supported(guc) &&
+  INTEL_GEN(guc_to_gt(guc)->i915) >= 11;
+}
+
  static bool __guc_submission_selected(struct intel_guc *guc)
  {
struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
@@ -2054,6 +2061,7 @@ static bool __guc_submission_selected(struct intel_guc 
*guc)
  
  void intel_guc_submission_init_early(struct intel_guc *guc)

  {
+   guc->submission_supported = __guc_submission_supported(guc);
guc->submission_selected = __guc_submission_selected(guc);
  }
  
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h

index a2a3fad72be1..be767eb6ff71 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
@@ -37,8 +37,7 @@ int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
  
  static inline bool intel_guc_submission_is_supported(struct intel_guc *guc)

  {
-   /* XXX: GuC submission is unavailable for now */
-   return false;
+   return guc->submission_supported;
  }
  
  static inline bool intel_guc_submission_is_wanted(struct intel_guc *guc)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c 
b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
index 7a69c3c027e9..61be0aa81492 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
@@ -34,8 +34,15 @@ static void uc_expand_default_options(struct intel_uc *uc)
return;
}
  
-	/* Default: enable HuC authentication only */

-   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+   /* Intermediate platforms are HuC authentication only */
+   if (IS_DG1(i915) || IS_ALDERLAKE_S(i915)) {
+   drm_dbg(&i915->drm, "Disabling GuC only due to old platform\n");


This comment does not seem accurate, given that DG1 is barely out, and 
ADL is not out yet. How about:


"Disabling GuC on untested platforms"?


+   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC;
+   return;
+   }
+
+   /* Default: enable HuC authentication and GuC submission */
+   i915->params.enable_guc = ENABLE_GUC_LOAD_HUC | ENABLE_GUC_SUBMISSION;


This seems to be in contradiction with the GuC submission plan which 
states:


"Not enabled by default on any current platforms but can be enabled via 
modparam enable_guc".


When you rework the patch, could you please add a warning when the user 
force-enables the GuC Command Submission? Something like:


"WARNING: The user force-enabled the experimental GuC command submission 
backend using i915.enable_guc. Please disable it if experiencing 
stability issues. No bug reports will be accepted on this backend".


This should allow you to work on the backend, while communicating 
clearly to users that it is not ready just yet. Once it has matured, the 
warning can be removed.


Cheers,
Martin


  }
  
  /* Reset GuC providing us with fresh state for both GuC and HuC.

@@ -313,9 +320,6 @@ static int __uc_init(struct intel_uc *uc)
if (i915_inject_probe_failure(uc_to_gt(uc)->i915))
return -ENOMEM;
  
-	/* XXX: GuC submission is unavailable for now */

-   GEM_BUG_ON(intel_uc_uses_guc_submission(uc));
-
ret = intel_guc_init(guc);
if (ret)
return ret;



Re: [PATCH 1/2] drm/doc/rfc: i915 GuC submission / DRM scheduler

2021-06-30 Thread Martin Peres

On 29/06/2021 22:35, Matthew Brost wrote:

Add entry for i915 GuC submission / DRM scheduler integration plan.
Follow up patch with details of new parallel submission uAPI to come.

v2:
  (Daniel Vetter)
   - Expand explaination of why bonding isn't supported for GuC
 submission
   - CC some of the DRM scheduler maintainers
   - Add priority inheritance / boosting use case
   - Add reasoning for removing in order assumptions
  (Daniel Stone)
   - Add links to priority spec
v4:
  (Tvrtko)
   - Add TODOs section
  (Daniel Vetter)
   - Pull in 1 line from following patch
v5:
  (Checkpatch)
   - Fix typos

Cc: Christian König 
Cc: Luben Tuikov 
Cc: Alex Deucher 
Cc: Steven Price 
Cc: Jon Bloomfield 
Cc: Dave Airlie 
Cc: Daniel Vetter 
Cc: Jason Ekstrand 
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Matthew Brost 
Reviewed-by: Daniel Vetter 
Acked-by: Dave Airlie 
---
  Documentation/gpu/rfc/i915_scheduler.rst | 91 
  Documentation/gpu/rfc/index.rst  |  4 ++
  2 files changed, 95 insertions(+)
  create mode 100644 Documentation/gpu/rfc/i915_scheduler.rst

diff --git a/Documentation/gpu/rfc/i915_scheduler.rst 
b/Documentation/gpu/rfc/i915_scheduler.rst
new file mode 100644
index ..7acd386a6b49
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_scheduler.rst
@@ -0,0 +1,91 @@
+=
+I915 GuC Submission/DRM Scheduler Section
+=
+
+Upstream plan
+=
+For upstream the overall plan for landing GuC submission and integrating the
+i915 with the DRM scheduler is:
+
+* Merge basic GuC submission
+   * Basic submission support for all gen11+ platforms
+   * Not enabled by default on any current platforms but can be enabled via
+ modparam enable_guc


Just a quick reminder that a lot of users have this parameter set in 
their grub command line because of the incorrect assumption that quick 
sync requires the HuC which requires the GuC command submission.


Just bear that in mind and make sure there is a minimum of quality in 
the GuC backend before allowing users to use the GuC, even behind this 
kernel parameter.


Maybe a warning in the kernel logs saying that this feature is 
experimental, and that "enable_guc=1" gives the fullest Quick Sync 
support, would improve user satisfaction and save you some time handling 
unwanted bugs.



+   * Lots of rework will need to be done to integrate with DRM scheduler so
+ no need to nit pick everything in the code, it just should be
+ functional, no major coding style / layering errors, and not regress
+ execlists
+   * Update IGTs / selftests as needed to work with GuC submission
+   * Enable CI on supported platforms for a baseline


What's the plan to keep the comparison between GuC and execlists? The CI 
machines cannot just test the GuC as it would expose users to 
regressions in the default mode, and machines are hard to come by in the 
CI lab.



+   * Rework / get CI heathly for GuC submission in place as needed
+* Merge new parallel submission uAPI
+   * Bonding uAPI completely incompatible with GuC submission, plus it has
+ severe design issues in general, which is why we want to retire it no
+ matter what
+   * New uAPI adds I915_CONTEXT_ENGINES_EXT_PARALLEL context setup step
+ which configures a slot with N contexts
+   * After I915_CONTEXT_ENGINES_EXT_PARALLEL a user can submit N batches to
+ a slot in a single execbuf IOCTL and the batches run on the GPU in
+ paralllel


FYI, this is a typo


+   * Initially only for GuC submission but execlists can be supported if
+ needed
+* Convert the i915 to use the DRM scheduler
+   * GuC submission backend fully integrated with DRM scheduler
+   * All request queues removed from backend (e.g. all backpressure
+ handled in DRM scheduler)
+   * Resets / cancels hook in DRM scheduler
+   * Watchdog hooks into DRM scheduler
+   * Lots of complexity of the GuC backend can be pulled out once
+ integrated with DRM scheduler (e.g. state machine gets
+ simplier, locking gets simplier, etc...)


As much as I would like more consistency in the English language, 
simplier is not a word in the dictionary.



+   * Execlists backend will minimum required to hook in the DRM scheduler


Can't parse this sentence. How about: "Minimum integration of the 
execlists backend in the DRM scheduler"?


Other than these typographical nitpicks, I think the plan is sane 
provided that you can find enough CI machines and add the warning.



+   * Legacy interface
+   * Features like timeslicing / preemption / virtual engines would
+ be difficult to integrate with the DRM scheduler and these
+ features are not required for GuC submission as the 

Re: [RFC PATCH 00/97] Basic GuC submission support in the i915

2021-05-12 Thread Martin Peres

On 11/05/2021 19:39, Matthew Brost wrote:

On Tue, May 11, 2021 at 08:26:59AM -0700, Bloomfield, Jon wrote:

-Original Message-
From: Martin Peres 
Sent: Tuesday, May 11, 2021 1:06 AM
To: Daniel Vetter 
Cc: Jason Ekstrand ; Brost, Matthew
; intel-gfx ;
dri-devel ; Ursulin, Tvrtko
; Ekstrand, Jason ;
Ceraolo Spurio, Daniele ; Bloomfield, Jon
; Vetter, Daniel ;
Harrison, John C 
Subject: Re: [RFC PATCH 00/97] Basic GuC submission support in the i915

On 10/05/2021 19:33, Daniel Vetter wrote:

On Mon, May 10, 2021 at 3:55 PM Martin Peres 

wrote:


On 10/05/2021 02:11, Jason Ekstrand wrote:

On May 9, 2021 12:12:36 Martin Peres  wrote:


Hi,

On 06/05/2021 22:13, Matthew Brost wrote:

Basic GuC submission support. This is the first bullet point in the
upstreaming plan covered in the following RFC [1].

At a very high level the GuC is a piece of firmware which sits between
the i915 and the GPU. It offloads some of the scheduling of contexts
from the i915 and programs the GPU to submit contexts. The i915
communicates with the GuC and the GuC communicates with the

GPU.


May I ask what will GuC command submission do that execlist

won't/can't

do? And what would be the impact on users? Even forgetting the

troubled

history of GuC (instability, performance regression, poor level of user
support, 6+ years of trying to upstream it...), adding this much code
and doubling the amount of validation needed should come with a
rationale making it feel worth it... and I am not seeing here. Would you
mind providing the rationale behind this work?



GuC submission will be disabled by default on all current upstream
platforms behind a module parameter - enable_guc. A value of 3 will
enable submission and HuC loading via the GuC. GuC submission

should

work on all gen11+ platforms assuming the GuC firmware is present.
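(For reference, the enable_guc=3 setup described above can be sketched as a modprobe config fragment; the file name is illustrative, not mandated by i915:)

```shell
# Sketch of the configuration described above (file name illustrative):
# /etc/modprobe.d/i915-guc.conf
#   options i915 enable_guc=3
#
# or equivalently on the kernel command line:
#   i915.enable_guc=3
#
# The active value can be read back from:
#   /sys/module/i915/parameters/enable_guc
```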


What is the plan here when it comes to keeping support for execlist? I
am afraid that landing GuC support in Linux is the first step towards
killing the execlist, which would force users to use proprietary
firmwares that even most Intel engineers have little influence over.
Indeed, if "drm/i915/guc: Disable semaphores when using GuC

scheduling"

which states "Disable semaphores when using GuC scheduling as

semaphores

are broken in the current GuC firmware." is anything to go by, it means
that even Intel developers seem to prefer working around the GuC
firmware, rather than fixing it.


Yes, landing GuC support may be the first step in removing execlist
support. The inevitable reality is that GPU scheduling is coming and
likely to be the only path in the not-too-distant future. (See also
the ongoing thread with AMD about fences.) I'm not going to pass
judgement on whether or not this is a good thing.  I'm just reading the
winds and, in my view, this is where things are headed for good or ill.

In answer to the question above, the answer to "what do we gain from
GuC?" may soon be, "you get to use your GPU."  We're not there yet

and,

again, I'm not necessarily advocating for it, but that is likely where
things are headed.


This will be a sad day, especially since it seems fundamentally opposed
to any long-term support, on top of taking away user freedom to
fix/tweak their system when Intel won't.


A firmware-based submission model isn't a bad design IMO and, aside

from

the firmware freedom issues, I think there are actual advantages to the
model. Immediately, it'll unlock a few features like parallel submission
(more on that in a bit) and long-running compute because they're
implemented in GuC and the work to implement them properly in the
execlist scheduler is highly non-trivial. Longer term, it may (no
guarantees) unlock some performance by getting the kernel out of the

way.


Oh, I definitely agree with firmware-based submission model not being a
bad design. I was even cheering for it in 2015. Experience with it made
me regret that deeply since :s

But with the DRM scheduler being responsible for most things, I fail to
see what we could offload in the GuC except context switching (like
every other manufacturer). The problem is, the GuC does way more than
just switching registers in bulk, and if the number of revisions of the
GuC is anything to go by, it is way too complex for me to feel
comfortable with it.


We need to flesh out that part of the plan more, but we're not going
to use drm scheduler for everything. It's only to handle the dma-fence
legacy side of things, which means:
- timeout handling for batches that take too long
- dma_fence dependency sorting/handling
- boosting of context from display flips (currently missing, needs to
be ported from drm/i915)

The actual round-robin/preempt/priority handling is still left to the
backend, in this case here the fw. So there's large chunks of
code/functionality where drm/scheduler won't be involved, and like
Jason says: The hw direction winds definitely blow in the direction
that this is a

Re: [RFC PATCH 00/97] Basic GuC submission support in the i915

2021-05-11 Thread Martin Peres

On 10/05/2021 19:33, Daniel Vetter wrote:

On Mon, May 10, 2021 at 3:55 PM Martin Peres  wrote:


On 10/05/2021 02:11, Jason Ekstrand wrote:

On May 9, 2021 12:12:36 Martin Peres  wrote:


Hi,

On 06/05/2021 22:13, Matthew Brost wrote:

Basic GuC submission support. This is the first bullet point in the
upstreaming plan covered in the following RFC [1].

At a very high level the GuC is a piece of firmware which sits between
the i915 and the GPU. It offloads some of the scheduling of contexts
from the i915 and programs the GPU to submit contexts. The i915
communicates with the GuC and the GuC communicates with the GPU.


May I ask what will GuC command submission do that execlist won't/can't
do? And what would be the impact on users? Even forgetting the troubled
history of GuC (instability, performance regression, poor level of user
support, 6+ years of trying to upstream it...), adding this much code
and doubling the amount of validation needed should come with a
rationale making it feel worth it... and I am not seeing here. Would you
mind providing the rationale behind this work?



GuC submission will be disabled by default on all current upstream
platforms behind a module parameter - enable_guc. A value of 3 will
enable submission and HuC loading via the GuC. GuC submission should
work on all gen11+ platforms assuming the GuC firmware is present.


What is the plan here when it comes to keeping support for execlist? I
am afraid that landing GuC support in Linux is the first step towards
killing the execlist, which would force users to use proprietary
firmwares that even most Intel engineers have little influence over.
Indeed, if "drm/i915/guc: Disable semaphores when using GuC scheduling"
which states "Disable semaphores when using GuC scheduling as semaphores
are broken in the current GuC firmware." is anything to go by, it means
that even Intel developers seem to prefer working around the GuC
firmware, rather than fixing it.


Yes, landing GuC support may be the first step in removing execlist
support. The inevitable reality is that GPU scheduling is coming and
likely to be the only path in the not-too-distant future. (See also
the ongoing thread with AMD about fences.) I'm not going to pass
judgement on whether or not this is a good thing.  I'm just reading the
winds and, in my view, this is where things are headed for good or ill.

In answer to the question above, the answer to "what do we gain from
GuC?" may soon be, "you get to use your GPU."  We're not there yet and,
again, I'm not necessarily advocating for it, but that is likely where
things are headed.


This will be a sad day, especially since it seems fundamentally opposed
to any long-term support, on top of taking away user freedom to
fix/tweak their system when Intel won't.


A firmware-based submission model isn't a bad design IMO and, aside from
the firmware freedom issues, I think there are actual advantages to the
model. Immediately, it'll unlock a few features like parallel submission
(more on that in a bit) and long-running compute because they're
implemented in GuC and the work to implement them properly in the
execlist scheduler is highly non-trivial. Longer term, it may (no
guarantees) unlock some performance by getting the kernel out of the way.


Oh, I definitely agree with firmware-based submission model not being a
bad design. I was even cheering for it in 2015. Experience with it made
me regret that deeply since :s

But with the DRM scheduler being responsible for most things, I fail to
see what we could offload in the GuC except context switching (like
every other manufacturer). The problem is, the GuC does way more than
just switching registers in bulk, and if the number of revisions of the
GuC is anything to go by, it is way too complex for me to feel
comfortable with it.


We need to flesh out that part of the plan more, but we're not going
to use drm scheduler for everything. It's only to handle the dma-fence
legacy side of things, which means:
- timeout handling for batches that take too long
- dma_fence dependency sorting/handling
- boosting of context from display flips (currently missing, needs to
be ported from drm/i915)

The actual round-robin/preempt/priority handling is still left to the
backend, in this case here the fw. So there's large chunks of
code/functionality where drm/scheduler won't be involved, and like
Jason says: The hw direction winds definitely blow in the direction
that this is all handled in hw.


The plan makes sense for a SRIOV-enable GPU, yes.

However, if the GuC is actually helping i915, then why not open source 
it and drop all the issues related to its stability? Wouldn't it be the 
perfect solution, as it would allow dropping execlist support for newer 
HW, and it would eliminate the concerns about maintenance of stable 
releases of Linux?





In the same vein, I have another concern related to the impact of GuC on
Linux's stable re

Re: [RFC PATCH 00/97] Basic GuC submission support in the i915

2021-05-11 Thread Martin Peres

On 10/05/2021 19:25, Jason Ekstrand wrote:

On May 10, 2021 08:55:55 Martin Peres  wrote:


On 10/05/2021 02:11, Jason Ekstrand wrote:

On May 9, 2021 12:12:36 Martin Peres  wrote:


Hi,

On 06/05/2021 22:13, Matthew Brost wrote:

Basic GuC submission support. This is the first bullet point in the
upstreaming plan covered in the following RFC [1].

At a very high level the GuC is a piece of firmware which sits between
the i915 and the GPU. It offloads some of the scheduling of contexts
from the i915 and programs the GPU to submit contexts. The i915
communicates with the GuC and the GuC communicates with the GPU.


May I ask what will GuC command submission do that execlist won't/can't
do? And what would be the impact on users? Even forgetting the troubled
history of GuC (instability, performance regression, poor level of user
support, 6+ years of trying to upstream it...), adding this much code
and doubling the amount of validation needed should come with a
rationale making it feel worth it... and I am not seeing here. Would you
mind providing the rationale behind this work?



GuC submission will be disabled by default on all current upstream
platforms behind a module parameter - enable_guc. A value of 3 will
enable submission and HuC loading via the GuC. GuC submission should
work on all gen11+ platforms assuming the GuC firmware is present.


What is the plan here when it comes to keeping support for execlist? I
am afraid that landing GuC support in Linux is the first step towards
killing the execlist, which would force users to use proprietary
firmwares that even most Intel engineers have little influence over.
Indeed, if "drm/i915/guc: Disable semaphores when using GuC scheduling"
which states "Disable semaphores when using GuC scheduling as semaphores
are broken in the current GuC firmware." is anything to go by, it means
that even Intel developers seem to prefer working around the GuC
firmware, rather than fixing it.


Yes, landing GuC support may be the first step in removing execlist
support. The inevitable reality is that GPU scheduling is coming and
likely to be the only path in the not-too-distant future. (See also
the ongoing thread with AMD about fences.) I'm not going to pass
judgement on whether or not this is a good thing.  I'm just reading the
winds and, in my view, this is where things are headed for good or ill.

In answer to the question above, the answer to "what do we gain from
GuC?" may soon be, "you get to use your GPU."  We're not there yet and,
again, I'm not necessarily advocating for it, but that is likely where
things are headed.


This will be a sad day, especially since it seems fundamentally opposed
to any long-term support, on top of taking away user freedom to
fix/tweak their system when Intel won't.


A firmware-based submission model isn't a bad design IMO and, aside from
the firmware freedom issues, I think there are actual advantages to the
model. Immediately, it'll unlock a few features like parallel submission
(more on that in a bit) and long-running compute because they're
implemented in GuC and the work to implement them properly in the
execlist scheduler is highly non-trivial. Longer term, it may (no
guarantees) unlock some performance by getting the kernel out of the way.


Oh, I definitely agree with firmware-based submission model not being a
bad design. I was even cheering for it in 2015. Experience with it made
me regret that deeply since :s

But with the DRM scheduler being responsible for most things, I fail to
see what we could offload in the GuC except context switching (like
every other manufacturer). The problem is, the GuC does way more than
just switching registers in bulk, and if the number of revisions of the
GuC is anything to go by, it is way too complex for me to feel
comfortable with it.


It's more than just bulk register writes. When it comes to 
load-balancing multiple GPU users, firmware can theoretically preempt 
and switch faster leading to more efficient time-slicing. All we really 
need the DRM scheduler for is handling implicit dma_fence dependencies 
between different applications.


Right, this makes sense. However, if the GuC's interface was so simple, 
I doubt it would be at major version 60 already :s


I don't disagree with FW-based command submission, as it has a lot of 
benefits. I just don't like the route of going with a firmware no-one 
else than Intel can work on, *and* one that doesn't seem to concern 
itself with stable interfaces, and how i915 will have to deal with every 
generation using different interfaces (assuming the firmware was bug-free).






In the same vein, I have another concern related to the impact of GuC on
Linux's stable releases. Let's say that in 3 years, a new application
triggers a bug in command submission inside the firmware. Given that the
Linux community cannot patch the GuC firmware, how likely is it that
Intel would release a new GuC version? 

Re: [RFC PATCH 00/97] Basic GuC submission support in the i915

2021-05-11 Thread Martin Peres




On 11/05/2021 05:58, Dixit, Ashutosh wrote:

On Sun, 09 May 2021 16:11:43 -0700, Jason Ekstrand wrote:


Yes, landing GuC support may be the first step in removing execlist
support. The inevitable reality is that GPU scheduling is coming and
likely to be the only path in the not-too-distant future.  (See also
the ongoing thread with AMD about fences.) I'm not going to pass
judgement on whether or not this is a good thing.  I'm just reading the
winds and, in my view, this is where things are headed for good or ill.

In answer to the question above, the answer to "what do we gain from
GuC?" may soon be, "you get to use your GPU."  We're not there yet and,
again, I'm not necessarily advocating for it, but that is likely where
things are headed.

A firmware-based submission model isn't a bad design IMO and, aside from
the firmware freedom issues, I think there are actual advantages to the
model. Immediately, it'll unlock a few features like parallel submission
(more on that in a bit) and long-running compute because they're
implemented in GuC and the work to implement them properly in the
execlist scheduler is highly non-trivial.  Longer term, it may (no
guarantees) unlock some performance by getting the kernel out of the way.


I believe another main reason for GuC is support for HW based
virtualization like SRIOV. The only way to support SRIOV with execlists
would be to statically partition the GPU between VM's, any dynamic
partitioning needs something in HW.



FW-based command-submission is indeed a win for SRIOV and the current HW 
architecture.


Thanks for chiming in!
Martin


Re: [RFC PATCH 00/97] Basic GuC submission support in the i915

2021-05-10 Thread Martin Peres

On 10/05/2021 02:11, Jason Ekstrand wrote:

On May 9, 2021 12:12:36 Martin Peres  wrote:


Hi,

On 06/05/2021 22:13, Matthew Brost wrote:

Basic GuC submission support. This is the first bullet point in the
upstreaming plan covered in the following RFC [1].

At a very high level the GuC is a piece of firmware which sits between
the i915 and the GPU. It offloads some of the scheduling of contexts
from the i915 and programs the GPU to submit contexts. The i915
communicates with the GuC and the GuC communicates with the GPU.


May I ask what will GuC command submission do that execlist won't/can't
do? And what would be the impact on users? Even forgetting the troubled
history of GuC (instability, performance regression, poor level of user
support, 6+ years of trying to upstream it...), adding this much code
and doubling the amount of validation needed should come with a
rationale making it feel worth it... and I am not seeing here. Would you
mind providing the rationale behind this work?



GuC submission will be disabled by default on all current upstream
platforms behind a module parameter - enable_guc. A value of 3 will
enable submission and HuC loading via the GuC. GuC submission should
work on all gen11+ platforms assuming the GuC firmware is present.


What is the plan here when it comes to keeping support for execlist? I
am afraid that landing GuC support in Linux is the first step towards
killing the execlist, which would force users to use proprietary
firmwares that even most Intel engineers have little influence over.
Indeed, if "drm/i915/guc: Disable semaphores when using GuC scheduling"
which states "Disable semaphores when using GuC scheduling as semaphores
are broken in the current GuC firmware." is anything to go by, it means
that even Intel developers seem to prefer working around the GuC
firmware, rather than fixing it.


Yes, landing GuC support may be the first step in removing execlist 
support. The inevitable reality is that GPU scheduling is coming and 
likely to be the only path in the not-too-distant future. (See also 
the ongoing thread with AMD about fences.) I'm not going to pass 
judgement on whether or not this is a good thing.  I'm just reading the 
winds and, in my view, this is where things are headed for good or ill.


In answer to the question above, the answer to "what do we gain from 
GuC?" may soon be, "you get to use your GPU."  We're not there yet and, 
again, I'm not necessarily advocating for it, but that is likely where 
things are headed.


This will be a sad day, especially since it seems fundamentally opposed 
to any long-term support, on top of taking away user freedom to 
fix/tweak their system when Intel won't.


A firmware-based submission model isn't a bad design IMO and, aside from 
the firmware freedom issues, I think there are actual advantages to the 
model. Immediately, it'll unlock a few features like parallel submission 
(more on that in a bit) and long-running compute because they're 
implemented in GuC and the work to implement them properly in the 
execlist scheduler is highly non-trivial. Longer term, it may (no 
guarantees) unlock some performance by getting the kernel out of the way.


Oh, I definitely agree with firmware-based submission model not being a 
bad design. I was even cheering for it in 2015. Experience with it made 
me regret that deeply since :s


But with the DRM scheduler being responsible for most things, I fail to 
see what we could offload in the GuC except context switching (like 
every other manufacturer). The problem is, the GuC does way more than 
just switching registers in bulk, and if the number of revisions of the 
GuC is anything to go by, it is way too complex for me to feel 
comfortable with it.





In the same vein, I have another concern related to the impact of GuC on
Linux's stable releases. Let's say that in 3 years, a new application
triggers a bug in command submission inside the firmware. Given that the
Linux community cannot patch the GuC firmware, how likely is it that
Intel would release a new GuC version? That would not be necessarily
such a big problem if newer versions of the GuC could easily be
backported to this potentially-decade-old Linux version, but given that
the GuC seems to have ABI-breaking changes on a monthly cadence (we are
at major version 60 *already*? :o), I would say that it is
highly-unlikely that it would not require potentially-extensive changes
to i915 to make it work, making the fix almost impossible to land in the
stable tree... Do you have a plan to mitigate this problem?

Patches like "drm/i915/guc: Disable bonding extension with GuC
submission" also make me twitch, as this means the two command
submission paths will not be functionally equivalent, and enabling GuC
could thus introduce a user-visible regression (one app used to work,
then stopped working). Could you add in the commit's message a proof
that this would not 

Re: [RFC PATCH 00/97] Basic GuC submission support in the i915

2021-05-09 Thread Martin Peres

Hi,

On 06/05/2021 22:13, Matthew Brost wrote:

Basic GuC submission support. This is the first bullet point in the
upstreaming plan covered in the following RFC [1].

At a very high level the GuC is a piece of firmware which sits between
the i915 and the GPU. It offloads some of the scheduling of contexts
from the i915 and programs the GPU to submit contexts. The i915
communicates with the GuC and the GuC communicates with the GPU.


May I ask what will GuC command submission do that execlist won't/can't 
do? And what would be the impact on users? Even forgetting the troubled 
history of GuC (instability, performance regression, poor level of user 
support, 6+ years of trying to upstream it...), adding this much code 
and doubling the amount of validation needed should come with a 
rationale making it feel worth it... and I am not seeing here. Would you 
mind providing the rationale behind this work?




GuC submission will be disabled by default on all current upstream
platforms behind a module parameter - enable_guc. A value of 3 will
enable submission and HuC loading via the GuC. GuC submission should
work on all gen11+ platforms assuming the GuC firmware is present.


What is the plan here when it comes to keeping support for execlist? I 
am afraid that landing GuC support in Linux is the first step towards 
killing the execlist, which would force users to use proprietary 
firmwares that even most Intel engineers have little influence over. 
Indeed, if "drm/i915/guc: Disable semaphores when using GuC scheduling" 
which states "Disable semaphores when using GuC scheduling as semaphores 
are broken in the current GuC firmware." is anything to go by, it means 
that even Intel developers seem to prefer working around the GuC 
firmware, rather than fixing it.


In the same vein, I have another concern related to the impact of GuC on 
Linux's stable releases. Let's say that in 3 years, a new application 
triggers a bug in command submission inside the firmware. Given that the 
Linux community cannot patch the GuC firmware, how likely is it that 
Intel would release a new GuC version? That would not be necessarily 
such a big problem if newer versions of the GuC could easily be 
backported to this potentially-decade-old Linux version, but given that 
the GuC seems to have ABI-breaking changes on a monthly cadence (we are 
at major version 60 *already*? :o), I would say that it is 
highly-unlikely that it would not require potentially-extensive changes 
to i915 to make it work, making the fix almost impossible to land in the 
stable tree... Do you have a plan to mitigate this problem?


Patches like "drm/i915/guc: Disable bonding extension with GuC 
submission" also make me twitch, as this means the two command 
submission paths will not be functionally equivalent, and enabling GuC 
could thus introduce a user-visible regression (one app used to work, 
then stopped working). Could you add in the commit's message a proof 
that this would not end up being a user regression (in which case, why 
have this codepath to begin with?).


Finally, could you explain why IGT tests need to be modified to work with the 
GuC [1], and how much of the code in this series is covered by 
existing/upcoming tests? I would expect a very solid set of tests to 
minimize the maintenance burden, and enable users to reproduce potential 
issues found in this new codepath (too many users run with enable_guc=3, 
as can be seen on Google[2]).


Looking forward to reading up about your plan, and the commitments Intel 
would put in place to make this feature something users should be 
looking forward to rather than fear.


Thanks,
Martin

[2] https://www.google.com/search?q=enable_guc%3D3



This is a huge series and it is completely unrealistic to merge all of
these patches at once. Fortunately I believe we can break down the
series into different merges:

1. Merge Chris Wilson's patches. These have already been reviewed
upstream and I fully agree with these patches as a precursor to GuC
submission.

2. Update to GuC 60.1.2. These are largely Michal's patches.

3. Turn on GuC/HuC auto mode by default.

4. Additional patches needed to support GuC submission. This is any
patch not covered by 1-3 in the first 34 patches. e.g. 'Engine relative
MMIO'

5. GuC submission support. Patches number 35+. These all don't have to
merge at once though as we don't actually allow GuC submission until the
last patch of this series.

[1] https://patchwork.freedesktop.org/patch/432206/?series=89840&rev=1

Signed-off-by: Matthew Brost 

Chris Wilson (3):
   drm/i915/gt: Move engine setup out of set_default_submission
   drm/i915/gt: Move submission_method into intel_gt
   drm/i915/gt: Move CS interrupt handler to the backend

Daniele Ceraolo Spurio (6):
   drm/i915/guc: skip disabling CTBs before sanitizing the GuC
   drm/i915/guc: use probe_error log for CT enablement failure
   drm/i915/guc: enable only the user interrupt when using GuC submission
   drm/i915/uc: 

Re: [PATCH] drm/ttm: fix unused function warning

2020-12-09 Thread Martin Peres

On 04/12/2020 18:51, Arnd Bergmann wrote:

From: Arnd Bergmann 

ttm_pool_type_count() is not used when debugfs is disabled:

drivers/gpu/drm/ttm/ttm_pool.c:243:21: error: unused function 
'ttm_pool_type_count' [-Werror,-Wunused-function]
static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt)

Move the definition into the #ifdef block.

Fixes: d099fc8f540a ("drm/ttm: new TT backend allocation pool v3")
Signed-off-by: Arnd Bergmann 


Thanks Arnd! The patch looks good to me:

Reviewed-by: Martin Peres 


---
  drivers/gpu/drm/ttm/ttm_pool.c | 29 ++---
  1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 5455b2044759..7b2f60616750 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -239,21 +239,6 @@ static struct page *ttm_pool_type_take(struct 
ttm_pool_type *pt)
return p;
  }
  
-/* Count the number of pages available in a pool_type */

-static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt)
-{
-   unsigned int count = 0;
-   struct page *p;
-
-   spin_lock(&pt->lock);
-   /* Only used for debugfs, the overhead doesn't matter */
-   list_for_each_entry(p, &pt->pages, lru)
-   ++count;
-   spin_unlock(&pt->lock);
-
-   return count;
-}
-
  /* Initialize and add a pool type to the global shrinker list */
  static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool 
*pool,
   enum ttm_caching caching, unsigned int order)
@@ -543,6 +528,20 @@ void ttm_pool_fini(struct ttm_pool *pool)
  EXPORT_SYMBOL(ttm_pool_fini);
  
  #ifdef CONFIG_DEBUG_FS

+/* Count the number of pages available in a pool_type */
+static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt)
+{
+   unsigned int count = 0;
+   struct page *p;
+
+   spin_lock(&pt->lock);
+   /* Only used for debugfs, the overhead doesn't matter */
+   list_for_each_entry(p, &pt->pages, lru)
+   ++count;
+   spin_unlock(&pt->lock);
+
+   return count;
+}
  
  /* Dump information about the different pool types */

  static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,


___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH] drm/ttm: fix missing NULL check in the new page pool

2020-11-09 Thread Martin Peres

On 09/11/2020 17:54, Alex Deucher wrote:

On Fri, Nov 6, 2020 at 9:10 AM Christian König
 wrote:


The pool parameter can be NULL if we free through the shrinker.

Signed-off-by: Christian König 


Does this fix:
https://gitlab.freedesktop.org/drm/amd/-/issues/1362


It does! So you can add my:

Tested-by: Martin Peres 
Acked-by: Martin Peres 

Thanks,
Martin



Acked-by: Alex Deucher 




---
  drivers/gpu/drm/ttm/ttm_pool.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 44ec41aa78d6..1b96780b4989 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -135,7 +135,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum 
ttm_caching caching,
 set_pages_wb(p, 1 << order);
  #endif

-   if (!pool->use_dma_alloc) {
+   if (!pool || !pool->use_dma_alloc) {
 __free_pages(p, order);
 return;
 }
--
2.25.1

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel



Re: [PATCH i-g-t] tests/kms_crtc_background_color: overhaul to match upstream ABI (v5.1)

2019-09-30 Thread Martin Peres
On 30/09/2019 19:13, Matt Roper wrote:
> CRTC background color kernel patches were written about 2.5 years ago
> and floated on the upstream mailing list, but since no opensource
> userspace materialized, we never actually merged them.  However the
> corresponding IGT test did get merged and has basically been dead code
> ever since.
> 
> A couple years later we finally have an open source userspace
> (ChromeOS), so lets update the IGT test to match the ABI that's actually
> going upstream and to remove some of the cruft from the original test
> that wouldn't actually work.
> 
> It's worth noting that CRC's don't seem to work properly on Intel gen9.
> The bspec does tell us that we must have one plane enabled before taking
> a pipe-level ("dmux") CRC, so this test is violating that by disabling
> all planes; it's quite possible that this is the source of the CRC
> mismatch.  If running on an Intel platform, you may want to run in
> interactive mode ("--interactive-debug=bgcolor --skip-crc-compare") to
> ensure that the colors being generated actually do match visually since
> the CRC's can't be trusted.

Hmm, landing a feature without automated testing for it is a bit too
much to ask IMO.

I have two proposals to make it happen:

- Could we add a workaround for the affected intel platforms? If the
problem really is because no planes are enabled, then surely a
fully-transparent plane would be a sufficient workaround.

- If CRCs really cannot be used for this, then we should use the
chamelium for it. The idea would be to detect if the connector is
connected to a chamelium, and if so use chamelium's CRC.

How does this sound?

Martin

> 
> v2:
>  - Swap red and blue ordering in property value to reflect change
>in v2 of kernel series.
> 
> v3:
>  - Minor updates to proposed uapi helpers (s/rgba/argb/).
> 
> v4:
>  - General restructuring into pipe/color subtests.
>  - Use RGB2101010 framebuffers for comparison so that we match the bits
>of precision that Intel hardware background color accepts
> 
> v5:
>  - Re-enable CRC comparisons; just because these don't work on Intel
>doesn't mean we shouldn't use them on other vendors' platforms
>(Daniel).
>  - Use DRIVER_ANY rather than DRIVER_INTEL. (Heiko Stuebner)
> 
> v5.1:
>  - Update commit message body discussion of CRC issues.
> 
> Cc: igt-...@lists.freedesktop.org
> Cc: Daniel Vetter 
> Cc: Heiko Stuebner 
> Signed-off-by: Matt Roper 
> ---
>  lib/igt_kms.c |   2 +-
>  tests/kms_crtc_background_color.c | 263 ++
>  2 files changed, 127 insertions(+), 138 deletions(-)
> 
> diff --git a/lib/igt_kms.c b/lib/igt_kms.c
> index e9b80b9b..9a7f0e23 100644
> --- a/lib/igt_kms.c
> +++ b/lib/igt_kms.c
> @@ -391,7 +391,7 @@ const char * const 
> igt_plane_prop_names[IGT_NUM_PLANE_PROPS] = {
>  };
>  
>  const char * const igt_crtc_prop_names[IGT_NUM_CRTC_PROPS] = {
> - [IGT_CRTC_BACKGROUND] = "background_color",
> + [IGT_CRTC_BACKGROUND] = "BACKGROUND_COLOR",
>   [IGT_CRTC_CTM] = "CTM",
>   [IGT_CRTC_GAMMA_LUT] = "GAMMA_LUT",
>   [IGT_CRTC_GAMMA_LUT_SIZE] = "GAMMA_LUT_SIZE",
> diff --git a/tests/kms_crtc_background_color.c 
> b/tests/kms_crtc_background_color.c
> index 3df3401f..58cdd7a9 100644
> --- a/tests/kms_crtc_background_color.c
> +++ b/tests/kms_crtc_background_color.c
> @@ -25,164 +25,153 @@
>  #include "igt.h"
>  #include 
>  
> -
>  IGT_TEST_DESCRIPTION("Test crtc background color feature");
>  
> +/*
> + * Paints a desired color into a full-screen primary plane and then compares
> + * that CRC with turning off all planes and setting the CRTC background to 
> the
> + * same color.
> + *
> + * At least on current Intel platforms, the CRC tests appear to always fail,
> + * even though the resulting pipe output seems to be the same.  The bspec 
> tells
> + * us that we must have at least one plane turned on when taking a pipe-level
> + * CRC, so the fact that we're violating that may explain the failures.  If
> + * your platform gives failures for all tests, you may want to run with
> + * --interactive-debug=bgcolor --skip-crc-compare to visually inspect that 
> the
> + * background color matches the equivalent solid plane.
> + */
> +
>  typedef struct {
> - int gfx_fd;
>   igt_display_t display;
> - struct igt_fb fb;
> - igt_crc_t ref_crc;
> + int gfx_fd;
> + igt_output_t *output;
>   igt_pipe_crc_t *pipe_crc;
> + drmModeModeInfo *mode;
>  } data_t;
>  
> -#define BLACK  0x00   /* BGR 8bpc */
> -#define CYAN   0x00   /* BGR 8bpc */
> -#define PURPLE 0xFF00FF   /* BGR 8bpc */
> -#define WHITE  0xFF   /* BGR 8bpc */
> -
> -#define BLACK640x /* BGR 16bpc */
> -#define CYAN64 0x /* BGR 16bpc */
> -#define PURPLE64   0x /* BGR 16bpc */
> -#define YELLOW64   0x /* BGR 16bpc */
> -#define WHITE640x /* BGR 

Re: [Intel-gfx] [PATCH v3 00/11] drm/fb-helper: Move modesetting code to drm_client

2019-04-23 Thread Martin Peres
On 20/04/2019 20:24, Noralf Trønnes wrote:
> 
> 
> Den 20.04.2019 12.45, skrev Noralf Trønnes:
>> This moves the modesetting code from drm_fb_helper to drm_client so it
>> can be shared by all internal clients.
>>
>> Changes this time:
>> - Use full drm_client_init/release for the modesets (Daniel Vetter)
>> - drm_client_for_each_modeset: use lockdep_assert_held (Daniel Vetter)
>> - Hook up to Documentation/gpu/drm-client.rst (Daniel Vetter)
>>
> 
> I got Fi.CI.IGT failures on this one:
> 
>   * igt@kms_fbcon_fbt@psr:
> - shard-skl:  PASS -> FAIL
> 
>   * igt@kms_fbcon_fbt@psr-suspend:
> - shard-iclb: PASS -> FAIL +1
> - shard-skl:  NOTRUN -> FAIL
> 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_12850/
> https://patchwork.freedesktop.org/series/58597/
> 
> The previous version of this series reported success:
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_12720/
> But AFAICT those fbcon tests didn't succeed when I look at the details.
> 
> I'd appreciate if someone with Intel CI knowledge could have a look at this.

The issue is real, but I honestly can't tell whether it is due to your
patches or not. There was a regression last week, and we reworked some
filters that may no longer apply to the base that was selected to test
your patches.

I queued a re-run! We should have the results in the next 6-12 hours.

Sorry for the delay!
Martin

> 
> Noralf.
> 
>> Noralf.
>>
>> Noralf Trønnes (11):
>>   drm/atomic: Move __drm_atomic_helper_disable_plane/set_config()
>>   drm/fb-helper: Avoid race with DRM userspace
>>   drm/fb-helper: No need to cache rotation and sw_rotations
>>   drm/fb-helper: Remove drm_fb_helper_crtc->{x,y,desired_mode}
>>   drm/fb-helper: Remove drm_fb_helper_crtc
>>   drm/fb-helper: Prepare to move out commit code
>>   drm/fb-helper: Move out commit code
>>   drm/fb-helper: Remove drm_fb_helper_connector
>>   drm/fb-helper: Prepare to move out modeset config code
>>   drm/fb-helper: Move out modeset config code
>>   drm/client: Hack: Add bootsplash example
>>
>>  Documentation/gpu/drm-client.rst |3 +
>>  Documentation/gpu/todo.rst   |   10 +
>>  drivers/gpu/drm/Kconfig  |5 +
>>  drivers/gpu/drm/Makefile |3 +-
>>  drivers/gpu/drm/drm_atomic.c |  168 
>>  drivers/gpu/drm/drm_atomic_helper.c  |  164 ---
>>  drivers/gpu/drm/drm_auth.c   |   20 +
>>  drivers/gpu/drm/drm_bootsplash.c |  362 +++
>>  drivers/gpu/drm/drm_client.c |   17 +-
>>  drivers/gpu/drm/drm_client_modeset.c | 1085 
>>  drivers/gpu/drm/drm_crtc_internal.h  |5 +
>>  drivers/gpu/drm/drm_drv.c|4 +
>>  drivers/gpu/drm/drm_fb_helper.c  | 1381 +++---
>>  drivers/gpu/drm/drm_internal.h   |2 +
>>  include/drm/drm_atomic_helper.h  |4 -
>>  include/drm/drm_client.h |   49 +
>>  include/drm/drm_fb_helper.h  |  102 +-
>>  17 files changed, 1864 insertions(+), 1520 deletions(-)
>>  create mode 100644 drivers/gpu/drm/drm_bootsplash.c
>>  create mode 100644 drivers/gpu/drm/drm_client_modeset.c
>>
> ___
> Intel-gfx mailing list
> intel-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
> 
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

Re: [Nouveau] [PATCH] drm/nouveau: Don't disable polling in fallback mode

2018-09-14 Thread Martin Peres
On 14/09/2018 10:28, Ben Skeggs wrote:
> On Wed, 12 Sep 2018 at 20:59, Takashi Iwai  wrote:
>>
>> When a fan is controlled via linear fallback without cstate, we
>> shouldn't stop polling.  Otherwise it won't be adjusted again and
>> keeps running at an initial crazy pace.
> Martin,
> 
> Any thoughts on this?
> 
> Ben.

Wow, blast from the past!

Anyway, the analysis is spot on here. When using cstate-based fan
management (changing the fan speed based on the frequency in use),
polling is unnecessary and this function should only be called when
changing the pstate.

However, in the absence of ANY information, we fall back to
temperature-based management, which requires constant polling, so the
patch is accurate: poll = false should only be set if we have a cstate.

So, the patch is Reviewed-by: Martin Peres 

> 
>>
>> Fixes: 800efb4c2857 ("drm/nouveau/drm/therm/fan: add a fallback if no fan 
>> control is specified in the vbios")
>> Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=1103356
>> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107447

I see that Thomas has been having issues with the noise level anyway. I
suggest he should bump the value of temp1_auto_point1_temp (see
https://www.kernel.org/doc/Documentation/thermal/nouveau_thermal).

The default value is set to 90°C, which is quite safe on these old GPUs
(NVIDIA G71 / nv49). I would say that it is safe to go up to 110°C,
which should reduce the noise level.

Another technique may be to reduce the minimum fan speed to something
lower than 30%. It should increase the slope but reduce the noise level
at a given temperature.

One reason why these GPUs run so hot on nouveau is the lack of power
and clock gating. I am sorry that I never finished reverse engineering
these...

Anyway, thanks a lot for the patch!

>> Reported-by: Thomas Blume 
>> Signed-off-by: Takashi Iwai 
>>
>> ---
>>  drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c | 7 ---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c 
>> b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
>> index 3695cde669f8..07914e36939e 100644
>> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
>> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
>> @@ -132,11 +132,12 @@ nvkm_therm_update(struct nvkm_therm *therm, int mode)
>> duty = nvkm_therm_update_linear(therm);
>> break;
>> case NVBIOS_THERM_FAN_OTHER:
>> -   if (therm->cstate)
>> +   if (therm->cstate) {
>> duty = therm->cstate;
>> -   else
>> +   poll = false;
>> +   } else {
>> duty = 
>> nvkm_therm_update_linear_fallback(therm);
>> -   poll = false;
>> +   }
>> break;
>> }
>> immd = false;
>> --
>> 2.18.0
>>
>> ___
>> Nouveau mailing list
>> nouv...@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/nouveau



Re: Aspirant for GSOC 2018 for Nouveau Vulkan driver

2018-03-07 Thread Martin Peres
Hi Anusha,

Sorry, I was expecting that userspace developers would answer you after
your first message, and I missed your second one! My sincere apologies.

Generally, the process is that the student researches the topic by
first asking developers questions about the effort, and then makes
their own plan. You would then present this plan while looking for a
mentor.

Of course, you can expect help from the community in order to better
understand what needs to be done, but don't expect to be given a
detailed plan, just pointers.

Sorry again for my lack of feedback,
Martin

On 07/03/18 05:53, Anusha Srivastava wrote:
> Hi,
> 
> I am not been able to contact with mentor of this project.
> Can someone else from the community help me with this ?
> 
> 
> Regards,
> Anusha Srivastava
> 
> 
> On 3 March 2018 at 11:16, Anusha Srivastava  wrote:
>> Hi Martin,
>>
>> Any update on this ?
>> Regards,
>> Anusha Srivastava
>>
>>
>> On 28 February 2018 at 23:37, Anusha Srivastava  
>> wrote:
>>> Hi,
>>>
>>> I would like to participate in  GSOC 2018 with Xorg to contribute to
>>> project "Initial Nouveau Vulkan driver'
>>> I would need some help in how to get started with the same.
>>>
>>> Regards,
>>> Anusha Srivastava



Google Summer of Code 2018

2018-02-26 Thread Martin Peres
Hi everyone,

Just a quick word to remind you that the X.Org Foundation got accepted
to the Google Summer of Code 2018!

As a potential mentor, if you have a project falling under the
foundation's (large) umbrella that you would like to kick start or get
help finishing, please add it to the list on the ideas page[1] as soon
as possible. Students will start applying on the 12th of March, see the
full timeline[2].

As a student, check out the projects on our ideas page[1]. If you find one
that interests you, check out our application guidelines[3] and, if you
are eligible, contact your potential mentor and start discussing right
away with him/her. We welcome any student who finds the prospect of
working on our Open Source Graphics stack exciting!

I will be once again the primary contact for the X.Org Foundation, so
please ask me anything by email or on IRC (mupuf on freenode). Other
administrators are Alex Deucher and Taylor Campbell (both Cc:ed).

Looking forward to interacting with old and new contributors, and maybe
welcome previous GSoC/EVoC students as mentors now that they grew to
become core contributors and/or have full-time jobs in the Graphics
stack (*wink wink nudge nudge*)!

Martin, on behalf of the X.Org Foundation

[1] https://www.x.org/wiki/SummerOfCodeIdeas/
[2] https://developers.google.com/open-source/gsoc/timeline
[3] https://www.x.org/wiki/GSoCApplication/


Re: [RFC v3 1/4] drm/nouveau: Add support for basic clockgating on Kepler1

2018-01-27 Thread Martin Peres
 @@ -1760,7 +1761,7 @@ nve7_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk104_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1824,7 +1825,7 @@ nvf0_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk110_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1862,7 +1863,7 @@ nvf1_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk110_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1900,7 +1901,7 @@ nv106_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk208_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1938,7 +1939,7 @@ nv108_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk208_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -2508,6 +2509,7 @@ nvkm_device_fini(struct nvkm_device *device, bool 
> suspend)
>   }
>   }
>  
> + nvkm_therm_clkgate_fini(device->therm, suspend);
>  
>   if (device->func->fini)
>   device->func->fini(device, suspend);
> @@ -2597,6 +2599,7 @@ nvkm_device_init(struct nvkm_device *device)
>   }
>  
>   nvkm_acpi_init(device);
> + nvkm_therm_clkgate_enable(device->therm);
>  
>   time = ktime_to_us(ktime_get()) - time;
>   nvdev_trace(device, "init completed in %lldus\n", time);
> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild 
> b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild
> index 7ba56b12badd..4bac4772d8ed 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild
> @@ -10,6 +10,7 @@ nvkm-y += nvkm/subdev/therm/nv50.o
>  nvkm-y += nvkm/subdev/therm/g84.o
>  nvkm-y += nvkm/subdev/therm/gt215.o
>  nvkm-y += nvkm/subdev/therm/gf119.o
> +nvkm-y += nvkm/subdev/therm/gk104.o
>  nvkm-y += nvkm/subdev/therm/gm107.o
>  nvkm-y += nvkm/subdev/therm/gm200.o
>  nvkm-y += nvkm/subdev/therm/gp100.o
> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c 
> b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
> index f27fc6d0d4c6..e4c96e46db8f 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
> @@ -21,6 +21,7 @@
>   *
>   * Authors: Martin Peres
>   */
> +#include 
>  #include "priv.h"
>  
>  int
> @@ -297,6 +298,38 @@ nvkm_therm_attr_set(struct nvkm_therm *therm,
>   return -EINVAL;
>  }
>  
> +void
> +nvkm_therm_clkgate_enable(struct nvkm_therm *therm)
> +{
> + if (!therm->func->clkgate_enable || !therm->clkgating_enabled)
> + return;
> +
> + nvkm_debug(&therm->subdev,
> +"Enabling clockgating\n");
> + therm->func->clkgate_enable(therm);
> +}
> +
> +void
> +nvkm_therm_clkgate_fini(struct nvkm_therm *therm, bool suspend)
> +{
> + if (!therm->func->clkgate_fini || !therm->clkgating_enabled)
> + return;
> +
> + nvkm_debug(&therm->subdev,
> +"Preparing clockgating for %s\n",
> +suspend ? "suspend" : "fini");
> + therm->func->clkgate_fini(therm, suspend);
> +}
> +
> +static void
> +nvkm_therm_clkgate_oneinit(struct nvkm_therm *therm)
> +{
> + if (!therm->func->clkgate_enable || !therm->clkgating_enabled)
> + return;
> +
> + nvkm_info(&therm->subdev, "Clockgating enabled\n");

Thanks for adding this!

> +}
> +
>  static void
>  nvkm_therm_intr(struct nvkm_subdev *subdev)
>  {
> @@ -333,6 +366,7 @@ nvkm_therm_oneinit(struct nvkm_subdev *subdev)
>   nvkm_therm_fan_ctor(therm);
>   nvkm_therm_fan_mode(therm, NVKM_THERM_CTRL_AUTO);
>   nvkm_therm_sensor_preinit(therm);
> + nvkm_therm_clkgate_oneinit(therm);
>   return 0;
>  }
>  
> @@ -374,15 +408,10 @@ nvkm_therm = {
>   .intr = nvkm_therm_intr,
>  };

Re: [RFC v3 2/4] drm/nouveau: Add support for BLCG on Kepler1

2018-01-27 Thread Martin Peres
On 26/01/18 22:59, Lyude Paul wrote:
> This enables BLCG optimization for kepler1. When using clockgating,
> nvidia's firmware has a set of registers which are initially programmed
> by the vbios with various engine delays and other mysterious settings
> that are safe enough to bring up the GPU. However, the values used by
> the vbios are more power hungry then they need to be, so the nvidia driver

then -> than.

With the comment addressed about not exposing clock gating until
patches 2, 3, and 4 have landed, the series is:

Reviewed-by: Martin Peres <martin.pe...@free.fr>

Thanks a lot! I really like how this turned out :)

> writes it's own more optimized set of BLCG settings before enabling
> CG_CTRL. This adds support for programming the optimized BLCG values
> during engine/subdev init, which enables rather significant power
> savings.
> 
> This introduces the nvkm_therm_clkgate_init() helper, which we use to
> program the optimized BLCG settings before enabling clockgating with
> nvkm_therm_clkgate_enable.
> 
> As well, this commit shares a lot more code with Fermi since BLCG is
> mostly the same there as far as we can tell. In the future, it's likely
> we'll reformat the clkgate_packs for kepler1 so that they share a list
> of mmio packs with Fermi.
> 
> Signed-off-by: Lyude Paul <ly...@redhat.com>
> ---
>  .../gpu/drm/nouveau/include/nvkm/subdev/therm.h|  12 ++
>  drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h |   1 +
>  drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c | 207 
> +
>  drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.h |  55 ++
>  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.c |   6 +
>  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk104.c |  47 +
>  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk104.h |  35 
>  drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h  |   2 +
>  drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild   |   1 +
>  drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c   |  10 +
>  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gf100.c  |  75 
>  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gk104.c  |   1 +
>  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gt215.c  |   2 +-
>  drivers/gpu/drm/nouveau/nvkm/subdev/therm/priv.h   |   8 +
>  14 files changed, 461 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.h
>  create mode 100644 drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk104.h
>  create mode 100644 drivers/gpu/drm/nouveau/nvkm/subdev/therm/gf100.c
> 
> diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/therm.h 
> b/drivers/gpu/drm/nouveau/include/nvkm/subdev/therm.h
> index 240b19bb4667..9398d9f09339 100644
> --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/therm.h
> +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/therm.h
> @@ -46,6 +46,16 @@ enum nvkm_therm_attr_type {
>   NVKM_THERM_ATTR_THRS_SHUTDOWN_HYST = 17,
>  };
>  
> +struct nvkm_therm_clkgate_init {
> + u32 addr;
> + u8  count;
> + u32 data;
> +};
> +
> +struct nvkm_therm_clkgate_pack {
> + const struct nvkm_therm_clkgate_init *init;
> +};
> +
>  struct nvkm_therm {
>   const struct nvkm_therm_func *func;
>   struct nvkm_subdev subdev;
> @@ -92,6 +102,8 @@ struct nvkm_therm {
>  int nvkm_therm_temp_get(struct nvkm_therm *);
>  int nvkm_therm_fan_sense(struct nvkm_therm *);
>  int nvkm_therm_cstate(struct nvkm_therm *, int, int);
> +void nvkm_therm_clkgate_init(struct nvkm_therm *,
> +  const struct nvkm_therm_clkgate_pack *);
>  void nvkm_therm_clkgate_enable(struct nvkm_therm *);
>  void nvkm_therm_clkgate_fini(struct nvkm_therm *, bool);
>  
> diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h 
> b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h
> index d7c2adb9b543..c8ec3fd97155 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h
> +++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h
> @@ -137,6 +137,7 @@ struct gf100_gr_func {
>   int (*rops)(struct gf100_gr *);
>   int ppc_nr;
>   const struct gf100_grctx_func *grctx;
> + const struct nvkm_therm_clkgate_pack *clkgate_pack;
>   struct nvkm_sclass sclass[];
>  };
>  
> diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c 
> b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c
> index 5e82f94c2245..17cea9c70f7f 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c
> @@ -22,6 +22,7 @@
>   * Authors: Ben Skeggs <bske...@redhat.com>
>   */
>  #include "gf100.h"
> +#include "gk104.h"
>  #include "ctxgf100.h"
>  
>  #include 
> @@ -173,6 +174,208 @@ gk104_gr_pack_mmio[] = {

Re: [RFC 1/4] drm/nouveau: Add support for basic clockgating on Kepler1

2018-01-21 Thread Martin Peres
gm200_therm_new(struct nvkm_device *, int, struct nvkm_therm **);
>  int gp100_therm_new(struct nvkm_device *, int, struct nvkm_therm **);
> diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c 
> b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
> index 00eeaaffeae5..6c5f966c66ad 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
> @@ -28,6 +28,7 @@
>  #include 
>  
>  #include 
> +#include 
>  
>  static DEFINE_MUTEX(nv_devices_mutex);
>  static LIST_HEAD(nv_devices);
> @@ -1682,7 +1683,7 @@ nve4_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk104_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1721,7 +1722,7 @@ nve6_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk104_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1760,7 +1761,7 @@ nve7_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk104_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1824,7 +1825,7 @@ nvf0_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk110_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1862,7 +1863,7 @@ nvf1_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk110_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1900,7 +1901,7 @@ nv106_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk208_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -1938,7 +1939,7 @@ nv108_chipset = {
>   .mxm = nv50_mxm_new,
>   .pci = gk104_pci_new,
>   .pmu = gk208_pmu_new,
> - .therm = gf119_therm_new,
> + .therm = gk104_therm_new,
>   .timer = nv41_timer_new,
>   .top = gk104_top_new,
>   .volt = gk104_volt_new,
> @@ -2508,6 +2509,7 @@ nvkm_device_fini(struct nvkm_device *device, bool 
> suspend)
>   }
>   }
>  
> + nvkm_therm_clkgate_fini(device->therm, suspend);
>  
>   if (device->func->fini)
>   device->func->fini(device, suspend);
> @@ -2597,6 +2599,7 @@ nvkm_device_init(struct nvkm_device *device)
>   }
>  
>   nvkm_acpi_init(device);
> + nvkm_therm_clkgate_enable(device->therm);
>  
>   time = ktime_to_us(ktime_get()) - time;
>   nvdev_trace(device, "init completed in %lldus\n", time);
> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild 
> b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild
> index 7ba56b12badd..4bac4772d8ed 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/Kbuild
> @@ -10,6 +10,7 @@ nvkm-y += nvkm/subdev/therm/nv50.o
>  nvkm-y += nvkm/subdev/therm/g84.o
>  nvkm-y += nvkm/subdev/therm/gt215.o
>  nvkm-y += nvkm/subdev/therm/gf119.o
> +nvkm-y += nvkm/subdev/therm/gk104.o
>  nvkm-y += nvkm/subdev/therm/gm107.o
>  nvkm-y += nvkm/subdev/therm/gm200.o
>  nvkm-y += nvkm/subdev/therm/gp100.o
> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c 
> b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
> index f27fc6d0d4c6..ba1a3aabb559 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
> @@ -21,6 +21,7 @@
>   *
>   * Authors: Martin Peres
>   */
> +#include 
>  #include "priv.h"
>  
>  int
> @@ -297,6 +298,47 @@ nvkm_therm_attr_set(struct nvkm_therm *therm,
>   return -EINVAL;
>  }
>  
> +void
> +nvkm_therm_clkgate_enable(struct nvkm_therm *therm)
> +{
> + if (!therm->func->clkgate_enable || !therm->clkgate_level)
> + return;
> +
> + nvkm_debug(&therm->subdev,
> +"Enabling clock/powergating\n");
> + therm->func->clkgate_enable(therm);
> +}
> +

Re: [PATCH] drm/nouveau: Document nouveau support for Tegra in DRIVER_DESC

2017-10-01 Thread Martin Peres
On 11/08/17 13:56, Rhys Kidd wrote:
> nouveau supports the Tegra K1 and higher after the SoC-based GPUs converged
> with the main GeForce GPU families.
> 
> Signed-off-by: Rhys Kidd <rhysk...@gmail.com>
> ---
>  drivers/gpu/drm/nouveau/nouveau_drv.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h 
> b/drivers/gpu/drm/nouveau/nouveau_drv.h
> index 822fe1d4d35e..fbe42ec0a5f1 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_drv.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
> @@ -5,7 +5,7 @@
>  #define DRIVER_EMAIL "nouv...@lists.freedesktop.org"
>  
>  #define DRIVER_NAME  "nouveau"
> -#define DRIVER_DESC  "nVidia Riva/TNT/GeForce/Quadro/Tesla"
> +#define DRIVER_DESC  "nVidia Riva/TNT/GeForce/Quadro/Tesla/Tegra"

The problem is that we do not support all Tegras. How about Tegra TK1+?

I really do not care much about this string, but I guess I probably
should a bit.

With the TK1 added, this is:

Reviewed-by: Martin Peres <martin.pe...@free.fr>

>  #define DRIVER_DATE  "20120801"
>  
>  #define DRIVER_MAJOR 1
> 



Re: [Nouveau] [PATCH] drm/nouveau/therm: fix spelling mistake on array thresolds

2017-06-27 Thread Martin Peres

On 27/06/17 11:08, Colin King wrote:

From: Colin Ian King <colin.k...@canonical.com>

Array thresolds should be named thresholds, rename it. Also make it
static const char * const.

Signed-off-by: Colin Ian King <colin.k...@canonical.com>


Thanks for doing this,

Reviewed-by: Martin Peres <martin.pe...@free.fr>


---
  drivers/gpu/drm/nouveau/nvkm/subdev/therm/temp.c | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/temp.c 
b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/temp.c
index e93b2410c38b..6449771b9dc6 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/temp.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/temp.c
@@ -83,7 +83,7 @@ nvkm_therm_sensor_event(struct nvkm_therm *therm, enum 
nvkm_therm_thrs thrs,
  {
	struct nvkm_subdev *subdev = &therm->subdev;
bool active;
-   const char *thresolds[] = {
+   static const char * const thresholds[] = {
"fanboost", "downclock", "critical", "shutdown"
};
int temperature = therm->func->temp_get(therm);
@@ -94,10 +94,10 @@ nvkm_therm_sensor_event(struct nvkm_therm *therm, enum 
nvkm_therm_thrs thrs,
if (dir == NVKM_THERM_THRS_FALLING)
nvkm_info(subdev,
  "temperature (%i C) went below the '%s' threshold\n",
- temperature, thresolds[thrs]);
+ temperature, thresholds[thrs]);
else
nvkm_info(subdev, "temperature (%i C) hit the '%s' threshold\n",
- temperature, thresolds[thrs]);
+ temperature, thresholds[thrs]);
  
  	active = (dir == NVKM_THERM_THRS_RISING);

switch (thrs) {





XDC 2017 : Call for paper

2017-06-06 Thread Martin Peres
Hello,

I have the pleasure to announce that the X.org Developer Conference 2017
will be held in Mountain View, California from September 20th to
September 22nd. The venue is located at the Googleplex.

The official page for the event is http://www.x.org/wiki/Events/XDC2017
while the call for papers is at http://www.x.org/wiki/Other/Press/CFP2017/

As usual, we are open to talks across the layers of the graphics stack,
from the kernel to desktop environments / graphical applications and
about how to make things better for the developers who build them.
Given that the conference is located at Google, we would welcome topics
related to Android and Chromebooks. We would also like to hear about
Virtual Reality and end-to-end buffer format negotiation. If you're not
sure if something might fit, mail me or add it to the ideas list found
in the program page.

The conference is free of charge and open to the general public. If
you plan on coming, please add yourself to the attendees list. We'll
use this list to make badges and plan for the catering, so if you are
attending please add your name as early as possible.

I am looking forward to seeing you there. If you have any
inquiries/questions, please send them to Stéphane Marchesin (please also
CC: board at foundation.x.org).

Martin Peres


Re: [Intel-gfx] Implementing Miracast

2017-06-01 Thread Martin Peres

On 01/06/17 09:02, Daniel Kasak wrote:

Any news? Seems every TV I bump into these days has Miracast support ...


Sorry, it must be frustrating. Some code needs to be re-implemented 
because it was GPL-based and we need it as MIT. Until I have some time 
to do this, this will probably not move forward :s




Re: [Mesa-dev] Requests for Proposal for hosting XDC 2018

2017-05-12 Thread Martin Peres
On 12/05/17 02:46, Manasi Navare wrote:
> Hi Daniel,
> 
> Is Call for Papers opened yet for XDC? When do they usually start accepting 
> proposals?

Hey Manasi,

Our call for papers is usually sent around June, with a deadline in
mid-August. Here is the call for papers from last year; we do expect it
to change: https://www.x.org/wiki/Other/Press/CFP2016/

Hope that helps :)

Martin


Re: [Nouveau] [PATCH v5 5/5] nouveau_hwmon: Change permissions to numeric

2017-05-01 Thread Martin Peres
On 26/04/17 19:46, Oscar Salvador wrote:
> This patch replaces the symbolic permissions with the numeric ones,
> and adds me to the authors too.
> 
> Signed-off-by: Oscar Salvador <osalvador.vilard...@gmail.com>


> ---
>  drivers/gpu/drm/nouveau/nouveau_hwmon.c | 14 +++---
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_hwmon.c 
> b/drivers/gpu/drm/nouveau/nouveau_hwmon.c
> index 9142779..45b5c85 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_hwmon.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_hwmon.c
> @@ -1,5 +1,6 @@
>  /*
> - * Copyright 2010 Red Hat Inc.
> + * Copyright 2010 Red Hat Inc. (Ben Skeggs)

Please drop this change.

> + * Copyright 2017 Oscar Salvador
>   *
>   * Permission is hereby granted, free of charge, to any person obtaining a
>   * copy of this software and associated documentation files (the "Software"),
> @@ -19,7 +20,6 @@
>   * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>   * OTHER DEALINGS IN THE SOFTWARE.
>   *
> - * Authors: Ben Skeggs

You can't remove people as being the author of something. Just add
yourself if you care about this (I did not care to add my name when I
wrote in this file, because the git history makes more sense nowadays).

Otherwise, I have no strong opinions on this patch. I guess the numeric
representation is easier to read, so I will give you my R-b for this and
let others decide what to do:

Reviewed-by: Martin Peres <martin.pe...@free.fr>

>   */
>  
>  #ifdef CONFIG_ACPI
> @@ -56,7 +56,7 @@ nouveau_hwmon_show_temp1_auto_point1_pwm(struct device *d,
>  {
>   return snprintf(buf, PAGE_SIZE, "%d\n", 100);
>  }
> -static SENSOR_DEVICE_ATTR(temp1_auto_point1_pwm, S_IRUGO,
> +static SENSOR_DEVICE_ATTR(temp1_auto_point1_pwm, 0444,
> nouveau_hwmon_show_temp1_auto_point1_pwm, NULL, 0);
>  
>  static ssize_t
> @@ -88,7 +88,7 @@ nouveau_hwmon_set_temp1_auto_point1_temp(struct device *d,
>  
>   return count;
>  }
> -static SENSOR_DEVICE_ATTR(temp1_auto_point1_temp, S_IRUGO | S_IWUSR,
> +static SENSOR_DEVICE_ATTR(temp1_auto_point1_temp, 0644,
> nouveau_hwmon_temp1_auto_point1_temp,
> nouveau_hwmon_set_temp1_auto_point1_temp, 0);
>  
> @@ -121,7 +121,7 @@ nouveau_hwmon_set_temp1_auto_point1_temp_hyst(struct 
> device *d,
>  
>   return count;
>  }
> -static SENSOR_DEVICE_ATTR(temp1_auto_point1_temp_hyst, S_IRUGO | S_IWUSR,
> +static SENSOR_DEVICE_ATTR(temp1_auto_point1_temp_hyst, 0644,
> nouveau_hwmon_temp1_auto_point1_temp_hyst,
> nouveau_hwmon_set_temp1_auto_point1_temp_hyst, 0);
>  
> @@ -255,7 +255,7 @@ nouveau_hwmon_set_pwm1_min(struct device *d, struct 
> device_attribute *a,
>   return count;
>  }
>  
> -static SENSOR_DEVICE_ATTR(pwm1_min, S_IRUGO | S_IWUSR,
> +static SENSOR_DEVICE_ATTR(pwm1_min, 0644,
> nouveau_hwmon_get_pwm1_min,
> nouveau_hwmon_set_pwm1_min, 0);
>  
> @@ -295,7 +295,7 @@ nouveau_hwmon_set_pwm1_max(struct device *d, struct 
> device_attribute *a,
>   return count;
>  }
>  
> -static SENSOR_DEVICE_ATTR(pwm1_max, S_IRUGO | S_IWUSR,
> +static SENSOR_DEVICE_ATTR(pwm1_max, 0644,
> nouveau_hwmon_get_pwm1_max,
> nouveau_hwmon_set_pwm1_max, 0);
>  
> 



Re: [Nouveau] [PATCH v5 4/5] nouveau_hwmon: Add support for auto_point attributes

2017-05-01 Thread Martin Peres
The name of the patch is misleading since you also add support for
pwm1_min/max.

Could you change the title to "nouveau/hwmon: expose the auto_point
and pwm_min/max attrs"? And while you are at it, please change every
commit title to nouveau/hwmon instead of nouveau_hwmon.

On 26/04/17 19:46, Oscar Salvador wrote:
> This patch creates a special group attributes for attrs like "*auto_point*".
> We check if we have support for them, and if we do, we gather them all in
> an attribute_group's structure which is the parameter regarding special groups
> of hwmon_device_register_with_info.
> 
> Signed-off-by: Oscar Salvador <osalvador.vilard...@gmail.com>
> ---
>  drivers/gpu/drm/nouveau/nouveau_hwmon.c | 29 -
>  1 file changed, 28 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_hwmon.c 
> b/drivers/gpu/drm/nouveau/nouveau_hwmon.c
> index 4db65fb..9142779 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_hwmon.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_hwmon.c
> @@ -358,6 +358,23 @@ nouveau_hwmon_get_power1_crit(struct nouveau_drm *drm)
>   return iccsense->power_w_crit;
>  }
>  
> +static struct attribute *pwm_fan_sensor_attrs[] = {
> +	&sensor_dev_attr_pwm1_min.dev_attr.attr,
> +	&sensor_dev_attr_pwm1_max.dev_attr.attr,
> + NULL
> +};
> +ATTRIBUTE_GROUPS(pwm_fan_sensor);
> +
> +static struct attribute *temp1_auto_point_sensor_attrs[] = {
> +	&sensor_dev_attr_temp1_auto_point1_pwm.dev_attr.attr,
> +	&sensor_dev_attr_temp1_auto_point1_temp.dev_attr.attr,
> +	&sensor_dev_attr_temp1_auto_point1_temp_hyst.dev_attr.attr,
> + NULL
> +};
> +ATTRIBUTE_GROUPS(temp1_auto_point_sensor);
> +
> +#define N_ATTR_GROUPS   3
> +
>  static const u32 nouveau_config_chip[] = {
>   HWMON_C_UPDATE_INTERVAL,
>   0
> @@ -792,17 +809,27 @@ nouveau_hwmon_init(struct drm_device *dev)
>  #if defined(CONFIG_HWMON) || (defined(MODULE) && 
> defined(CONFIG_HWMON_MODULE))
>   struct nouveau_drm *drm = nouveau_drm(dev);
>   struct nvkm_therm *therm = nvxx_therm(&drm->client.device);
> + const struct attribute_group *special_groups[N_ATTR_GROUPS];
>   struct nouveau_hwmon *hwmon;
>   struct device *hwmon_dev;
>   int ret = 0;
> + int i = 0;
>  
>   hwmon = drm->hwmon = kzalloc(sizeof(*hwmon), GFP_KERNEL);
>   if (!hwmon)
>   return -ENOMEM;
>   hwmon->dev = dev;
>  
> + if (therm && therm->attr_get && therm->attr_set) {
> + if (nvkm_therm_temp_get(therm) >= 0)
> + special_groups[i++] = &temp1_auto_point_sensor_group;
> + if (therm->fan_get && therm->fan_get(therm) >= 0)
> + special_groups[i++] = &pwm_fan_sensor_group;
> + }
> +
> + special_groups[i] = 0;
>   hwmon_dev = hwmon_device_register_with_info(dev->dev, "nouveau", dev,
> - &nouveau_chip_info, NULL);
> + &nouveau_chip_info, special_groups);

Please align &nouveau_chip_info with dev->dev and split special_groups
on the following line.

>   if (IS_ERR(hwmon_dev)) {
>   ret = PTR_ERR(hwmon_dev);
>   NV_ERROR(drm, "Unable to register hwmon device: %d\n", ret);
> 

This commit breaks bisectability, so Ben may have to squash it in the
previous one. Otherwise, this looks good to me:

Reviewed-by: Martin Peres <martin.pe...@free.fr>
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [Nouveau] [PATCH v5 3/5] nouveau_hwmon: Remove old code, add .write/.read operations

2017-05-01 Thread Martin Peres
> nouveau_read_string(struct device *dev, enum hwmon_sensor_types type, u32 attr,
>   return -EOPNOTSUPP;
>  }
>  
> +static int
> +nouveau_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
> + int channel, long *val)
> +{
> + switch (type) {
> + case hwmon_chip:
> + return nouveau_chip_read(dev, attr, channel, val);
> + case hwmon_temp:
> + return nouveau_temp_read(dev, attr, channel, val);
> + case hwmon_fan:
> + return nouveau_fan_read(dev, attr, channel, val);
> + case hwmon_in:
> + return nouveau_in_read(dev, attr, channel, val);
> + case hwmon_pwm:
> + return nouveau_pwm_read(dev, attr, channel, val);
> + case hwmon_power:
> + return nouveau_power_read(dev, attr, channel, val);
> + default:
> + return -EOPNOTSUPP;
> + }
> +}
> +
> +static int
> +nouveau_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
> + int channel, long val)
> +{
> + switch (type) {
> + case hwmon_pwm:
> + return nouveau_pwm_write(dev, attr, channel, val);

Where did all the temperature-related writes go?

Some VBIOSes have broken thermal values set by default; this is why we
allow users to override them through hwmon.

Could you expose _max, _max_hyst, _crit and _crit_hyst?

> + default:
> + return -EOPNOTSUPP;
> + }
> +}
> +
>  static const struct hwmon_ops nouveau_hwmon_ops = {
>   .is_visible = nouveau_is_visible,
> - .read = NULL,
> + .read = nouveau_read,
>   .read_string = nouveau_read_string,
> - .write = NULL,
> + .write = nouveau_write,
>  };
>  
>  static const struct hwmon_chip_info nouveau_chip_info = {
> @@ -942,8 +792,6 @@ nouveau_hwmon_init(struct drm_device *dev)
>  #if defined(CONFIG_HWMON) || (defined(MODULE) && 
> defined(CONFIG_HWMON_MODULE))
>   struct nouveau_drm *drm = nouveau_drm(dev);
>   struct nvkm_therm *therm = nvxx_therm(&drm->client.device);
> - struct nvkm_volt *volt = nvxx_volt(&drm->client.device);
> - struct nvkm_iccsense *iccsense = nvxx_iccsense(&drm->client.device);
>   struct nouveau_hwmon *hwmon;
>   struct device *hwmon_dev;
>   int ret = 0;
> @@ -953,79 +801,16 @@ nouveau_hwmon_init(struct drm_device *dev)
>   return -ENOMEM;
>   hwmon->dev = dev;
>  
> - hwmon_dev = hwmon_device_register(dev->dev);
> + hwmon_dev = hwmon_device_register_with_info(dev->dev, "nouveau", dev,
> + &nouveau_chip_info, NULL);
>   if (IS_ERR(hwmon_dev)) {
>   ret = PTR_ERR(hwmon_dev);
>   NV_ERROR(drm, "Unable to register hwmon device: %d\n", ret);
>   return ret;
>   }
> - dev_set_drvdata(hwmon_dev, dev);
> -
> - /* set the default attributes */
> - ret = sysfs_create_group(&hwmon_dev->kobj, &hwmon_default_attrgroup);
> - if (ret)
> - goto error;
> -
> - if (therm && therm->attr_get && therm->attr_set) {
> - /* if the card has a working thermal sensor */
> - if (nvkm_therm_temp_get(therm) >= 0) {
> - ret = sysfs_create_group(&hwmon_dev->kobj, 
> &hwmon_temp_attrgroup);
> - if (ret)
> -     goto error;
> - }
> -
> - /* if the card has a pwm fan */
> - /*XXX: incorrect, need better detection for this, some boards 
> have
> -  * the gpio entries for pwm fan control even when there's no
> -  * actual fan connected to it... therm table? */
> - if (therm->fan_get && therm->fan_get(therm) >= 0) {
> - ret = sysfs_create_group(&hwmon_dev->kobj,
> -  &hwmon_pwm_fan_attrgroup);
> - if (ret)
> - goto error;
> - }
> - }
> -
> - /* if the card can read the fan rpm */
> - if (therm && nvkm_therm_fan_sense(therm) >= 0) {
> - ret = sysfs_create_group(&hwmon_dev->kobj,
> -  &hwmon_fan_rpm_attrgroup);
> - if (ret)
> - goto error;
> - }
> -
> - if (volt && nvkm_volt_get(volt) >= 0) {
> - ret = sysfs_create_group(&hwmon_dev->kobj,
> -  &hwmon_in0_attrgroup);
> -
> - if (ret)
> - goto error;
> - }
> -
> - if (iccsense && iccsense->data_valid && !list_empty(&iccsense->rails)) {
> - ret = sysfs_create_group(&hwmon_dev->kobj,
> -  &hwmon_power_attrgroup);
> -
> - if (ret)
> - goto error;
> -
> - if (iccsense->power_w_max && iccsense->power_w_crit) {
> - ret = sysfs_create_group(&hwmon_dev->kobj,
> -  &hwmon_power_caps_attrgroup);
> - if (ret)
> - goto error;
> - }
> - }
>  
>   hwmon->hwmon = hwmon_dev;
> -
>   return 0;
> -
> -error:
> - NV_ERROR(drm, "Unable to create some hwmon sysfs files: %d\n", ret);
> - hwmon_device_unregister(hwmon_dev);
> - hwmon->hwmon = NULL;
> - return ret;
>  #else
>   return 0;
>  #endif
> @@ -1037,17 +822,8 @@ nouveau_hwmon_fini(struct drm_device *dev)
>  #if defined(CONFIG_HWMON) || (defined(MODULE) && 
> defined(CONFIG_HWMON_MODULE))
>   struct nouveau_hwmon *hwmon = nouveau_hwmon(dev);
>  
> - if (hwmon->hwmon) {
> - sysfs_remove_group(&hwmon->hwmon->kobj, 
> &hwmon_default_attrgroup);
> - sysfs_remove_group(&hwmon->hwmon->kobj, &hwmon_temp_attrgroup);
> - sysfs_remove_group(&hwmon->hwmon->kobj, 
> &hwmon_pwm_fan_attrgroup);
> - sysfs_remove_group(&hwmon->hwmon->kobj, 
> &hwmon_fan_rpm_attrgroup);
> - sysfs_remove_group(&hwmon->hwmon->kobj, &hwmon_in0_attrgroup);
> - sysfs_remove_group(&hwmon->hwmon->kobj, &hwmon_power_attrgroup);
> - sysfs_remove_group(&hwmon->hwmon->kobj, 
> &hwmon_power_caps_attrgroup);
> -
> + if (hwmon->hwmon)
>   hwmon_device_unregister(hwmon->hwmon);
> - }
>  
>   nouveau_drm(dev)->hwmon = NULL;
>   kfree(hwmon);
> 

Thanks a lot, this patch makes a huge improvement in readability! With
the comments addressed, this patch is:

Reviewed-by: Martin Peres <martin.pe...@free.fr>


Re: [Nouveau] [PATCH v5 2/5] nouveau_hwmon: Add nouveau_hwmon_ops structure with .is_visible/.read_string

2017-05-01 Thread Martin Peres
> +
> + if (!therm || !therm->attr_get || nvkm_therm_fan_sense(therm) < 0)
> + return 0;
> +
> + switch (attr) {
> + case hwmon_fan_input:
> + return 0444;
> + default:
> + return 0;
> + }
> +}
> +
> +static umode_t
> +nouveau_is_visible(const void *data, enum hwmon_sensor_types type, u32 attr,
> + int channel)

Could you align 'int channel' with 'const void *data'?

> +{
> + switch (type) {
> + case hwmon_chip:
> + return nouveau_chip_is_visible(data, attr, channel);
> + case hwmon_temp:
> + return nouveau_temp_is_visible(data, attr, channel);
> + case hwmon_fan:
> + return nouveau_fan_is_visible(data, attr, channel);
> + case hwmon_in:
> + return nouveau_input_is_visible(data, attr, channel);
> + case hwmon_pwm:
> + return nouveau_pwm_is_visible(data, attr, channel);
> + case hwmon_power:
> + return nouveau_power_is_visible(data, attr, channel);
> + default:
> + return 0;
> + }
> +}
> +
> +static const char input_label[] = "GPU core";
> +
> +static int
> +nouveau_read_string(struct device *dev, enum hwmon_sensor_types type, u32 
> attr,
> + int channel, char **buf)

Same as above.

> +{
> + if (type == hwmon_in && attr == hwmon_in_label) {
> + *buf = input_label;
> + return 0;
> + }
> +
> + return -EOPNOTSUPP;
> +}
> +
> +static const struct hwmon_ops nouveau_hwmon_ops = {
> + .is_visible = nouveau_is_visible,
> + .read = NULL,
> + .read_string = nouveau_read_string,
> + .write = NULL,
> +};
> +
> +static const struct hwmon_chip_info nouveau_chip_info = {
> + .ops = &nouveau_hwmon_ops,
> + .info = nouveau_info,
> +};
>  #endif
>  
>  int
>

With the power-related conditions and all the alignments fixed, patches
1-2 are:

Reviewed-by: Martin Peres <martin.pe...@free.fr>


Re: [Intel-gfx] [PATCH] drm: Document code of conduct

2017-04-11 Thread Martin Peres

On 11/04/17 10:51, Archit Taneja wrote:



On 04/11/2017 01:03 PM, Sumit Semwal wrote:

On 11 April 2017 at 12:38, Daniel Stone <dan...@fooishbar.org> wrote:

Hi,

On 11 April 2017 at 07:48, Daniel Vetter <daniel.vet...@ffwll.ch> wrote:

Note: As Daniel Stone mentioned in the announcement, fd.o admins
started chatting with the communities they're hosting, which includes the
X.org foundation board, to figure out how to fan out enforcement and
allow projects to run things on their own (with fd.o still as the
fallback).  So the details of enforcement (and appealing decisions)
might still change, but since this involves the board and lots more
people it'll take a while to get there. For now this is good enough I
think.


All true.

Reviewed-by: Daniel Stone <dani...@collabora.com>


Thanks for this, Daniel!

Reviewed-by: Sumit Semwal <sumit.sem...@linaro.org>


Acked-by: Archit Taneja <arch...@codeaurora.org>


Thanks for doing this, this was long overdue!

Reviewed-by: Martin Peres <martin.pe...@free.fr>



Re: GSOC(Xorg)

2017-04-04 Thread Martin Peres

On 04/04/17 01:17, Christian Lockley wrote:

Hi all, I'm Christian Lockley
I'm wondering if it's too late to apply for GSOC; if so, I understand and
will apply next year. If not, is the DRM kernel janitor position still
open? While I don't have kernel or graphics experience I do know C and
Java. If this position is available I'd like to work on the De-midlayer
drivers and/or Switch from reference/unreference to get/put projects.

Linked below is a git repo of a personal
project: https://github.com/clockley/watchdogd/


Hey Christian,

It may be too late for the GSoC, but the EVoC is still an option for you :)

Here are the application information: https://www.x.org/wiki/XorgEVoC/

Cheers,
Martin


Re: [RFC PATCH xserver] modesetting: re-set the crtc's mode when link-status goes BAD

2017-03-27 Thread Martin Peres

On 26/01/17 14:37, Martin Peres wrote:

Despite all the careful planning of the kernel, a link may become
insufficient to handle the currently-set mode. At this point, the
kernel should mark this particular configuration as being broken
and potentially prune the mode before setting the offending connector's
link-status to BAD and send the userspace a hotplug event. This may
happen right after a modeset or later on.

When available, we should use the link-status information to reset
the wanted mode.

Signed-off-by: Martin Peres <martin.pe...@linux.intel.com>


The relevant kernel patches have landed in drm-tip about a month ago.

Eric, would you mind providing feedback on or merging this patch?

Thanks,
Martin


Re: [Intel-gfx] Implementing Miracast

2017-03-07 Thread Martin Peres

On 07/03/17 05:00, Daniel Kasak wrote:

Any news on this? I'm also interested :)

Dan


Hmm, good question! I will ping internally and see if we are ready to 
release something as an RFC.


Martin


Re: KMS backlight ABI proposition

2017-02-24 Thread Martin Peres

On 24/02/17 12:44, Hans de Goede wrote:

Hi,

On 24-02-17 11:23, Martin Peres wrote:

On 24/02/17 11:59, Hans de Goede wrote:

Hi,

On 24-02-17 10:48, Hans de Goede wrote:

Hi,

On 24-02-17 10:46, Hans de Goede wrote:

Hi,

On 24-02-17 10:34, Jani Nikula wrote:

On Fri, 24 Feb 2017, Hans de Goede <hdego...@redhat.com> wrote:

On 24-02-17 09:43, Jani Nikula wrote:

I don't think we have any good ideas how to solve the property max
changing on the fly. But based on the discussion so far, it's
starting
to look like we'll need to study that more thoroughly.


As I mentioned in another part of the thread, I think we can just
return
-EBUSY if udev tries to change the binding when a drm-node is open
*and* the backlight/brightness property has been accessed (even if
read only) through that drm-node. We can reset the busy flag when
the (last open) drm-node gets closed.


That's certainly easier than coming up with ways to notify the
userspace
about property range changes. The obvious downside is that you
have to
kill X to do the re-association. That's fine for when everything just
works (i.e. everything happens before the drm node is opened), but
for
debugging it's a bit painful. Can we live with that?


Sure, debugging udev rules is somewhat tricky in general (requires
manually triggering udev). When asking users to test udev rule / hwdb
patches I usually just ask them to reboot.


I'm adding the *and* the backlight/brightness property has been
accessed
condition so that udev can still do its thing while a boot splash
screen is active. This assumes that boot splash screens do not
touch the brightness.


I presume udev will have to do its job before the drm node is
opened, I
think relying on the userspace not touching the brightness
property is
going to be racy/flaky.


Making sure that udev does its job before we show the splashscreen
is tricky, note the idea is that the splashscreen *never* touches
the brightness property and by the time regular X / wayland launches
udev should have had plenty of time to complete its job.

I do not know how hard it will be to add the code to detect triggering
the property being touched, but that is the best I can come up with
to still allow the bootsplash (e.g. plymouth) to use the drm-node.


p.s.

I realize this ain't pretty; alternatively we could just document that
root may change the back-end of the property and that in that case
the max value may change and that it is up to userspace to make sure
that it deals with this (e.g. by not changing the backend at a bad
time).


Sorry for all the mails, I just realized that this (making it
userspace's
problem entirely) is exactly what the input subsystem does. Things like
setkeycodes, but esp. also the fixing of min/max value for absolute axis
for touchpads are done by /lib/udev/hwdb.d/60-evdev.hwdb and the
unwritten rule there is that udev needs to do this before wayland or
xorg start using the touchpad and queries the min/max values (which it
does once and then caches the results).

I just checked the implementation of the EVIOCSABS ioctl and it does
not check that the device is not in use while the changes are applied.

So I think the best solution here is to just document that changing the
backend and thus the max value while it is being used is not the best
idea, but is allowed by the kernel.


What's wrong with sending a hotplug event upon changing the backlight
driver? This would work even when X is running!


Propagating that is going to be tricky, does randr even allow changing
the range of a property after it has been registered ?


It does, unless the property was set immutable.



With that said, yes sending a change uevent when this changes probably
is a good idea regardless.


I think that solves the problem pretty nicely, so yes :)
Will wait until monday to send an updated email with the new proposal.

Dave, I would really like to hear more about your thoughts on this. Did 
we explain sufficiently well why we think pushing work onto the 
userspace will not work?


Martin


Re: KMS backlight ABI proposition

2017-02-24 Thread Martin Peres

On 24/02/17 11:59, Hans de Goede wrote:

Hi,

On 24-02-17 10:48, Hans de Goede wrote:

Hi,

On 24-02-17 10:46, Hans de Goede wrote:

Hi,

On 24-02-17 10:34, Jani Nikula wrote:

On Fri, 24 Feb 2017, Hans de Goede  wrote:

On 24-02-17 09:43, Jani Nikula wrote:

I don't think we have any good ideas how to solve the property max
changing on the fly. But based on the discussion so far, it's
starting
to look like we'll need to study that more thoroughly.


As I mentioned in another part of the thread, I think we can just
return
-EBUSY if udev tries to change the binding when a drm-node is open
*and* the backlight/brightness property has been accessed (even if
read only) through that drm-node. We can reset the busy flag when
the (last open) drm-node gets closed.


That's certainly easier than coming up with ways to notify the
userspace
about property range changes. The obvious downside is that you have to
kill X to do the re-association. That's fine for when everything just
works (i.e. everything happens before the drm node is opened), but for
debugging it's a bit painful. Can we live with that?


Sure, debugging udev rules is somewhat tricky in general (requires
manually triggering udev). When asking users to test udev rule / hwdb
patches I usually just ask them to reboot.


I'm adding the *and* the backlight/brightness property has been
accessed
condition so that udev can still do its thing while a boot splash
screen is active. This assumes that boot splash screens do not
touch the brightness.


I presume udev will have to do its job before the drm node is opened, I
think relying on the userspace not touching the brightness property is
going to be racy/flaky.


Making sure that udev does its job before we show the splashscreen
is tricky, note the idea is that the splashscreen *never* touches
the brightness property and by the time regular X / wayland launches
udev should have had plenty of time to complete its job.

I do not know how hard it will be to add the code to detect triggering
the property being touched, but that is the best I can come up with
to still allow the bootsplash (e.g. plymouth) to use the drm-node.


p.s.

I realize this ain't pretty; alternatively we could just document that
root may change the back-end of the property and that in that case
the max value may change and that it is up to userspace to make sure
that it deals with this (e.g. by not changing the backend at a bad
time).


Sorry for all the mails, I just realized that this (making it userspace's
problem entirely) is exactly what the input subsystem does. Things like
setkeycodes, but esp. also the fixing of min/max value for absolute axis
for touchpads are done by /lib/udev/hwdb.d/60-evdev.hwdb and the
unwritten rule there is that udev needs to do this before wayland or
xorg start using the touchpad and queries the min/max values (which it
does once and then caches the results).

I just checked the implementation of the EVIOCSABS ioctl and it does
not check that the device is not in use while the changes are applied.

So I think the best solution here is to just document that changing the
backend and thus the max value while it is being used is not the best
idea, but is allowed by the kernel.


What's wrong with sending a hotplug event upon changing the backlight 
driver? This would work even when X is running!


Martin


Re: KMS backlight ABI proposition

2017-02-22 Thread Martin Peres

On 22/02/17 17:05, Jani Nikula wrote:

On Mon, 20 Feb 2017, Daniel Thompson  wrote:

=== 1) Backlight device interoperability ===

Since we need to keep backward compatibility of the backlight, we have
to keep the current backlight drivers.

Here are possible options:

  - Exclusive access: Unregister a backlight device when the drm
brightness property is requested/used;
  - Unidirectional access: When writing to the backlight property, update
the backlight device;
  - Bi-directional access: Propagate back changes from the backlight
device to the property's value.

Being bi-directional would be of course the best, but this requires that
both drivers have the same number of steps, otherwise, we may write a
value to the property, but get another one when reading it right after,
due to the non-bijective nature of the transformation.

I don't accept that bi-directional transfer requires the step range to
be the same. Isn't all that is required is acceptance that both sides
maintain a copy of the current value in their own number range and that
if X is written to then Y may change value (i.e. when mapping between
0..100 and 0..10 then if 0..100 is at 11 and 0..10 gets 1 written then
0..100 is allowed to change to 10).

I'd note also that the mechanisms inside backlight to support
sysfs_notify would mean *implementing* bi-directional comms isn't too
bloated even if the two sides used different number ranges.

I question the need and usefulness of bi-directional access, and I
question it being "the best". The end goal is to use the connector
property exclusively, and deprecate the sysfs API. If you choose to use
the connector property, you should steer clear from the sysfs. That's
part of the deal here.

The sysfs will still work as ever, it won't break or regress or go away
anytime soon, but the ABI and contract for the connector property will
be, "if you touch the sysfs while using the connector property, you
might get unexpected results reading back the property".

There *are* going to be subtle bugs with the simultaneous operation, and
I know I don't want to be in the receiving end of those bugs.

Raise your hands, who wants to deal with them? Who thinks it's worth it?


The current ABI proposal has mostly been proposed by Jani Nikula, as a
result of his experience and our discussions.

It takes the following approach:

  - Fixed number of steps (I think we should change it to expose the same
number of steps)

Fixing a large number of steps over an inflexible (let's say 8-level)
backlight device creates a new problem. User actions to
increase/decrease the backlight don't work unless the userspace knows
the hardware step size...

Many of the ACPI backlight interfaces have a limited number of steps,
such as 8 or 16.

However, at least for i915 native backlight, we might *theoretically*
have, say, 5000 steps. But we might have no clue how many user
perceivable distinct steps there are.


The 0..100 proposal below will encourage the userspace to implement
hotkeys that jump by 9 (because 0 is reserved with a special meaning),
and thus there will be dead spots where the hotkey has no effect.

One brainstormed idea was to provide a way to increase/decrease the
brightness by a user perceivable margin or N%, whichever is the bigger
change. I don't think we explored that in depth, or how feasible that is
with the properties. It might not solve everything, but it could solve
one class of problems with expanding a limited hardware range to 0..100.


  - Uni-directional: KMS -> backlight

See above.



  - Do not deal yet with 3) and 4): I have ideas, but I have been
procrastinating long-enough to send this email and we already have much
to discuss!

Do any of those ideas involve adding *new* API to provide information to
userspace to help it correct the curves (e.g. somewhat like ALSA)?

It's not that I object to such an approach but I consider it pointless
to present fixed range brightness levels if the userspace were to end up
responsible for curve correction.

One of the ideas we've discussed is having a property to adjust the
curve in kernel. If the driver knows parameters of the backlight, it
could populate the curve with the information it has, but it would allow
the userspace to adjust or replace it. The idea is that the userspace
could then treat the brightness property as linear wrt perceived
brightness. ("Perceived brightness" is kind of vague too, but let's not
go there just yet.)


One way would be to just expose a table containing trip points with
known (raw_value, luminance %) points. The userspace may just
allow changing between them or may want to interpolate if it needs
more trip points. This way, we don't have to deal with any curves in
the kernel space.

If this table were user-writable, this would allow people to calibrate
their monitor, as wanted (using a colourhug, or whatever).




  - Does not expose the current backlight power as we want to let the
kernel deal with 

Re: KMS backlight ABI proposition

2017-02-22 Thread Martin Peres

On 20/02/17 21:57, Hans de Goede wrote:

Hi,

On 20-02-17 20:27, Dave Airlie wrote:
On 17 February 2017 at 22:58, Martin Peres 
<martin.pe...@linux.intel.com> wrote:

Hey everyone,

We have been working towards exposing the backlight as a KMS property
instead of relying on the backlight drivers. We have CC:ed the people we
have found to be the most likely to be interested in the discussion, but
please add everyone you think would have some experience with this issue.


== Introduction ==

We are trying to bring the same level of support for the backlight on both
the xf86-video-intel and -modesetting DDX.

Looking into the situation of the backlight, we identified these problems
which are almost show-stoppers for -modesetting and wayland compositors:

 - There is no mapping between the backlight driver and DRM-connectors.
This means that, in case there are multiple backlight drivers, the
userspace has to have knowledge of the machine to know which driver
should be used. See the priority list for the intel driver [0].

 - The luminance curve of the backlight drivers is not specified, which
can lead to a bad user experience: little changes in the highest levels
but drastic changes in the low levels.

 - Writing to the backlight driver still requires root rights. Given that
the xserver and wayland compositors are now running root-less, this
means we would need a complex dance involving a setuid helper [1].

Hans de Goede has already given a presentation about these issues at
XDC2014. The slides are a good read[2].

[0]
https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/src/backlight.c#n259 



[1]
https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/src/backlight.c#n348 



[2]
https://www.x.org/wiki/Events/XDC2014/XDC2014GoedeBacklight/backlight.pdf 



== Proposal ==

Since David Herrmann already worked on this and proposed what I consider
to be great foundations for building towards a solution addressing the
issues above, I will just ask you to read his original words:

https://lists.freedesktop.org/archives/dri-devel/2014-September/067984.html 



Jumping in in the middle of the thread here, I'll reply to the top-level
post later.

You might want to read the rest of that thread, the response posted 
by Matthew


This is really messy and we made a decision to put the policy to pick
which backlight
driver into userspace because it's not something the kernel can figure
out properly.


That is actually not really true, the only policy which in practice lives
in userspace is picking firmware type backlight interfaces over platform /
raw. Everything, really EVERYTHING else is already handled in the kernel,
using quirk tables in various places. David Herrmann's proposal actually
might improve upon this by having a default binding behavior in the 
kernel

and then allow fixing up the binding between backlight interface and
drm connector through udev rules triggered by hwdb entries, which means
(some (*)) new quirks could be handled in userspace.


I fully agree with this. Pushing everything to the userspace sounds like
quite a risk and adds complexity. This is not something the -modesetting
driver and wayland compositors will deal with in a uniform way...

Also, this means we need to expose a lot of information to the
userspace, that means new interfaces (and new ABI).

By trying to handle all the common cases in the kernel space and letting
the userspace override the decision through udev or by hand, it should
result in a much better experience for everyone (no need to recompile
your ddx driver anymore).

Another added benefit of exposing backlight through KMS is that we do not
have to deal with two access control mechanisms, which should help the
userspace.



Things might have changed now a bit with Win10 not using ACPI backlight
controls if memory serves, but it's still an unholy mess.

I think MBPs can expose 3 backlights (ACPI, gmux and driver) and only
the gmux one does anything.


Correct, which is why we have this in the kernel:

drivers/gpu/drm/nouveau/nouveau_backlight.c :


	if (apple_gmux_present()) {
		NV_INFO(drm, "Apple GMUX detected: not registering "
			     "Nouveau backlight interface\n");
		return 0;
	}


How do you tackle that end of the problem, how does the i915/drm core
know when the driver for the one backlight it needs has appeared, and is
working? By deferring this to userspace we let the system load all the
drivers and the policy of picking the correct one is left there.


Except in practice it is not, each desktop-environment and then some
other bits and pieces all have their own code to pick which backlight
interface to use. Which luckily all are as simple as prefer firmware
over platform/raw, since doing something more smart would be a big
mess as everything has its own code for handling this.

And to work around this we end up with things like hiding the
acpi_video interface on win

Re: KMS backlight ABI proposition

2017-02-20 Thread Martin Peres

+plasma-devel, as suggested by Martin Gräßlin.


On 17/02/17 14:58, Martin Peres wrote:

Hey everyone,

We have been working towards exposing the backlight as a KMS property
instead of relying on the backlight drivers. We have CC:ed the people we
have found to be the more likely to be interested in the discussion but
please add everyone you think would have some experience with this issue.

== Introduction ==

We are trying to bring the same level of support for the backlight on
both the xf86-video-intel and -modesetting DDX.

Looking into the situation of the backlight, we identified these
problems which are almost show-stoppers for -modesetting and wayland
compositors:

 - There is no mapping between the backlight driver and DRM-connectors.
This means that, in case there are multiple backlight drivers, the
userspace has to have knowledge of the machine to know which driver
should be used. See the priority list for the intel driver [0].

 - The luminance curve of the backlight drivers is not specified, which
can lead to a bad user experience: Little changes in the highest levels
but drastic changes in the low levels.

 - Writing to the backlight driver still requires root rights. Given
that the xserver and wayland compositors are now running root-less, this
means we would need a complex dance involving a setuid helper [1].

Hans de Goede has already given a presentation about these issues at
XDC2014. The slides are a good read[2].

[0]
https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/src/backlight.c#n259


[1]
https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/src/backlight.c#n348


[2]
https://www.x.org/wiki/Events/XDC2014/XDC2014GoedeBacklight/backlight.pdf

== Proposal ==

Since David Herrmann already worked on this and proposed what I consider
to be great foundations for building towards a solution addressing the
issues above, I will just ask you to read his original words:

https://lists.freedesktop.org/archives/dri-devel/2014-September/067984.html

== Open issues ==

Here are the open issues we have identified with the solution proposed
by David:

  1) Backlight device interoperability: How far should we support
 mixing the backlight device and brightness property? Should it be
 unidirectional or bi-directional? What about the start-up value
 exposed by the brightness property?

  2) How many steps should be exposed: fixed or driver-dependent?

  3) Expected output curve: power? luminance? Simply monotonically
 increasing?

  4) Should the userspace be able to turn off the backlight? If so, how
 should it do it? What can we do to let the userspace distinguish
 between backlight off or on?

  5) Should we expose to the userspace what is the current backlight
 power?

Here is our current point of view on the matter:

=== 1) Backlight device interoperability ===

Since we need to keep backward compatibility of the backlight, we have
to keep the current backlight drivers.

Here are possible options:

 - Exclusive access: Unregister a backlight device when the drm
brightness property is requested/used;
 - Unidirectional access: When writing to the backlight property, update
the backlight device;
 - Bi-directional access: Propagate back changes from the backlight
device to the property's value.

Being bi-directional would be of course the best, but this requires that
both drivers have the same number of steps, otherwise, we may write a
value to the property, but get another one when reading it right after,
due to the non-bijective nature of the transformation.

Uni-directional would work in all cases, with the caveat that mixing
calls to the KMS property and the backlight device will not be supported
(changes made through the sysfs interface of the backlight driver will
not be reflected in the KMS property). At boot time, we should however
initialize the value of the backlight property with a value close to
what is currently set in the backlight driver.

Giving exclusive access does not sound very good to me, as it would be
hard for the userspace to deal with disappearing drivers...

=== 2) How many steps should be exposed ===

If the KMS property exposes the same number of steps as the backlight
driver, it allows us to get a bijective function between the two
interfaces, and allows bi-directional communication. The downside of
this is that it forces the userspace to deal with a variable number of
steps which can range from 4 to 1k+. Also, the userspace would have to
handle the case where there are fewer steps than it would like to expose.

If the KMS property exposes a fixed number of steps (say 100), it
becomes easy for the userspace to express the wanted brightness.
However, on drivers exposing fewer than these 100 steps, we cannot
guarantee that any change in the value will produce any change. If there
is only one possible value (on or off), the user may be trying to
change the brightness, a GUI would show what is the expected backlight
state

KMS backlight ABI proposition

2017-02-17 Thread Martin Peres

Hey everyone,

We have been working towards exposing the backlight as a KMS property 
instead of relying on the backlight drivers. We have CC:ed the people we 
have found to be the more likely to be interested in the discussion but 
please add everyone you think would have some experience with this issue.


== Introduction ==

We are trying to bring the same level of support for the backlight on 
both the xf86-video-intel and -modesetting DDX.


Looking into the situation of the backlight, we identified these 
problems which are almost show-stoppers for -modesetting and wayland 
compositors:


 - There is no mapping between the backlight driver and DRM-connectors. 
This means that, in case there are multiple backlight drivers, the 
userspace has to have knowledge of the machine to know which driver 
should be used. See the priority list for the intel driver [0].


 - The luminance curve of the backlight drivers is not specified, which 
can lead to a bad user experience: Little changes in the highest levels 
but drastic changes in the low levels.


 - Writing to the backlight driver still requires root rights. Given 
that the xserver and wayland compositors are now running root-less, this 
means we would need a complex dance involving a setuid helper [1].


Hans de Goede has already given a presentation about these issues at 
XDC2014. The slides are a good read[2].


[0] 
https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/src/backlight.c#n259


[1] 
https://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/src/backlight.c#n348


[2] 
https://www.x.org/wiki/Events/XDC2014/XDC2014GoedeBacklight/backlight.pdf


== Proposal ==

Since David Herrmann already worked on this and proposed what I consider
to be great foundations for building towards a solution addressing the
issues above, I will just ask you to read his original words:


https://lists.freedesktop.org/archives/dri-devel/2014-September/067984.html

== Open issues ==

Here are the open issues we have identified with the solution proposed 
by David:


  1) Backlight device interoperability: How far should we support
 mixing the backlight device and brightness property? Should it be
 unidirectional or bi-directional? What about the start-up value
 exposed by the brightness property?

  2) How many steps should be exposed: fixed or driver-dependent?

  3) Expected output curve: power? luminance? Simply monotonically
 increasing?

  4) Should the userspace be able to turn off the backlight? If so, how
 should it do it? What can we do to let the userspace distinguish
 between backlight off or on?

  5) Should we expose to the userspace what is the current backlight
 power?

Here is our current point of view on the matter:

=== 1) Backlight device interoperability ===

Since we need to keep backward compatibility of the backlight, we have 
to keep the current backlight drivers.


Here are possible options:

 - Exclusive access: Unregister a backlight device when the drm 
brightness property is requested/used;
 - Unidirectional access: When writing to the backlight property, 
update the backlight device;
 - Bi-directional access: Propagate back changes from the backlight 
device to the property's value.


Being bi-directional would be of course the best, but this requires that 
both drivers have the same number of steps, otherwise, we may write a 
value to the property, but get another one when reading it right after, 
due to the non-bijective nature of the transformation.


Uni-directional would work in all cases, with the caveat that mixing
calls to the KMS property and the backlight device will not be supported
(changes made through the sysfs interface of the backlight driver will
not be reflected in the KMS property). At boot time, we should however
initialize the value of the backlight property with a value close to
what is currently set in the backlight driver.


Giving exclusive access does not sound very good to me, as it would be 
hard for the userspace to deal with disappearing drivers...
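As a concrete illustration of the unidirectional option, here is a minimal sketch (all type and function names are hypothetical, not a proposed kernel API): writes to the KMS-side property are clamped and propagated down to the legacy backlight device, while direct writes to the device are deliberately never reflected back into the property:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-ins for the legacy backlight device and the
 * proposed KMS brightness property. */
struct backlight_dev {
    int max_level;
    int level;
};

struct kms_brightness_prop {
    int value;                /* 0..99 as seen by userspace */
    struct backlight_dev *bl;
};

/* Unidirectional access: property write -> device write, never the
 * other way around. */
static void kms_brightness_set(struct kms_brightness_prop *prop, int value)
{
    if (value < 0)
        value = 0;
    if (value > 99)
        value = 99;
    prop->value = value;
    /* propagate to the backlight device, scaling to its range */
    prop->bl->level = value * prop->bl->max_level / 99;
}

int main(void)
{
    struct backlight_dev bl = { .max_level = 7, .level = 7 };
    struct kms_brightness_prop prop = { .value = 99, .bl = &bl };

    kms_brightness_set(&prop, 50);
    printf("property %d -> backlight level %d\n", prop.value, bl.level);

    /* a direct sysfs-style write to the device is NOT reflected back */
    bl.level = 0;
    printf("property still reads %d\n", prop.value);
    return 0;
}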


=== 2) How many steps should be exposed ===

If the KMS property exposes the same number of steps as the backlight
driver, it allows us to get a bijective function between the two
interfaces, and allows bi-directional communication. The downside of
this is that it forces the userspace to deal with a variable number of
steps which can range from 4 to 1k+. Also, the userspace would have to
handle the case where there are fewer steps than it would like to expose.


If the KMS property exposes a fixed number of steps (say 100), it
becomes easy for the userspace to express the wanted brightness.
However, on drivers exposing fewer than these 100 steps, we cannot
guarantee that any change in the value will produce any change. If there
is only one possible value (on or off), the user may be trying to
change the brightness, a GUI would show what is the expected backlight
state, but no change in the

Re: [RFC PATCH xserver] modesetting: re-set the crtc's mode when link-status goes BAD

2017-02-15 Thread Martin Peres

On 13/02/17 23:05, Eric Anholt wrote:

I was just trying to provide review to get the kernel unstuck.  The
kernel should not be blocked until the patch gets lands (this obviously
isn't the case with ioctls, which *don't* land in userspace until kernel
does), you just need userspace published and generally agreed that the
ABI works.



Yeah, I found it a bit odd too that we would get such requirement.

Anyway, thanks for taking the time, we will take this as an ACK from 
your side, which should be enough to merge the patch in the kernel.


Thanks!
Martin
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [RFC PATCH xserver] modesetting: re-set the crtc's mode when link-status goes BAD

2017-02-08 Thread Martin Peres

On 06/02/17 17:50, Martin Peres wrote:

On 03/02/17 10:04, Daniel Vetter wrote:

On Fri, Feb 03, 2017 at 01:30:14AM +0200, Martin Peres wrote:

On 01/02/17 22:05, Manasi Navare wrote:

On Wed, Feb 01, 2017 at 11:58:16AM -0800, Eric Anholt wrote:

Jani Nikula <jani.nik...@linux.intel.com> writes:


On Tue, 31 Jan 2017, Eric Anholt <e...@anholt.net> wrote:

Martin Peres <martin.pe...@linux.intel.com> writes:


Despite all the careful planning of the kernel, a link may become
insufficient to handle the currently-set mode. At this point, the
kernel should mark this particular configuration as being broken
and potentially prune the mode before setting the offending connector's
link-status to BAD and sending the userspace a hotplug event. This may
happen right after a modeset or later on.

When available, we should use the link-status information to reset
the wanted mode.

Signed-off-by: Martin Peres <martin.pe...@linux.intel.com>

If I understand this right, there are two failure modes being
handled:

1) A mode that won't actually work because the link isn't good
enough.

2) A mode that should work, but link parameters were too
optimistic and
if we just ask the kernel to set the mode again it'll use more
conservative parameters that work.

This patch seems good for 2).  For 1), the drmmode_set_mode_major is
going to set our old mode back.  Won't the modeset then fail to link
train again, and bring us back into this loop?  The only escape
that I
see would be some other userspace responding to the advertised
mode list
changing, and then asking X to modeset to something new.

To avoid that failure busy loop, should we re-fetching modes at this
point, and only re-setting if our mode still exists?

Disclaimer: I don't know anything about the internals of the
modesetting
driver.

Perhaps we can identify the two cases now, but I'd put this more
generally: if the link status has gone bad, it's an indicator to
userspace that the circumstances may have changed, and userspace
should
query the kernel for the list of available modes again. It should no
longer trust information obtained prior to getting the bad link
status,
including the current mode.

But specifically, I think you're right, and AFAICT asking for the
list
of modes again is the only way for the userspace to distinguish
between
the two cases. I don't think there's a shortcut for deciding the
current
mode is still valid.

To avoid the busy-loop problem, I think I'd like this patch to
re-query
the kernel to ask about the current mode list, and only try to re-set
the mode if our mode is still there.

If the mode isn't there, then it's up to the DE to take action in
response to the notification of new modes.  If you don't have a DE to
take appropriate action, you're kind of out of luck.

As far as the ABI goes, this seems fine to me.  The only concern I had
about ABI was having to walk all the connectors on every uevent to see
if any had gone bad -- couldn't we have a flag of some sort about what
the uevent indicates?  But uevents should be super rare, so I'd say
the
kernel could go ahead with the current plan.

Yes I agree. The kernel sets the link status BAD as soon as link
training fails but does not prune the modes until a new modelist is
requested by the userspace. So this patch should re-query the mode list
as soon as it sees the link status BAD in order for the kernel to
validate the modes again based on new link parameters and send a new
mode list back.

Seems like bad behaviour from the kernel, isn't it? It should return
immediately if the mode is gonna be pruned :s

The mode list pruning isn't relevant for modesets, the kernel doesn't
validate requested modes against that. The mode list is purely for
userspace's information. Which means updating it only when userspace asks
for it is fine.


Hmm, ok. Fair enough!


I also thought some more about the loop when we're stuck on BAD, and I
think it shouldn't happen: When the link goes BAD we update the link
parameter values to the new limits, and the kernel will reject any mode
that won't fit anymore. Of course you might be unlucky, and the new link
limits are also not working, but eventually you're stuck at the "your
link is broken, sry" stage, where the kernel rejects everything :-)

So I think the busy-loop has strictly a limited amount of time until it
runs out of steam.


OK, I have given it more thought and discussed with Ville and Chris IRL
or on IRC.

My current proposal is based on the idea that the kernel should reject a
mode it knows it cannot set. This is already largely tested in real
life: Try setting 4K@60Hz on an HDMI 1.x monitor, it should immediately
fail on prepare(). For this proposal to work, we would need to put in
the ABI that a driver that sets the link-status to BAD should also make
sure that whatever the userspace does, no infinite loop should be
created (by changing the maximum link characteristics before setting the
link-status property

Re: [RFC PATCH xserver] modesetting: re-set the crtc's mode when link-status goes BAD

2017-02-06 Thread Martin Peres

On 03/02/17 10:04, Daniel Vetter wrote:

On Fri, Feb 03, 2017 at 01:30:14AM +0200, Martin Peres wrote:

On 01/02/17 22:05, Manasi Navare wrote:

On Wed, Feb 01, 2017 at 11:58:16AM -0800, Eric Anholt wrote:

Jani Nikula <jani.nik...@linux.intel.com> writes:


On Tue, 31 Jan 2017, Eric Anholt <e...@anholt.net> wrote:

Martin Peres <martin.pe...@linux.intel.com> writes:


Despite all the careful planning of the kernel, a link may become
insufficient to handle the currently-set mode. At this point, the
kernel should mark this particular configuration as being broken
and potentially prune the mode before setting the offending connector's
link-status to BAD and sending the userspace a hotplug event. This may
happen right after a modeset or later on.

When available, we should use the link-status information to reset
the wanted mode.

Signed-off-by: Martin Peres <martin.pe...@linux.intel.com>

If I understand this right, there are two failure modes being handled:

1) A mode that won't actually work because the link isn't good enough.

2) A mode that should work, but link parameters were too optimistic and
if we just ask the kernel to set the mode again it'll use more
conservative parameters that work.

This patch seems good for 2).  For 1), the drmmode_set_mode_major is
going to set our old mode back.  Won't the modeset then fail to link
train again, and bring us back into this loop?  The only escape that I
see would be some other userspace responding to the advertised mode list
changing, and then asking X to modeset to something new.

To avoid that failure busy loop, should we re-fetching modes at this
point, and only re-setting if our mode still exists?

Disclaimer: I don't know anything about the internals of the modesetting
driver.

Perhaps we can identify the two cases now, but I'd put this more
generally: if the link status has gone bad, it's an indicator to
userspace that the circumstances may have changed, and userspace should
query the kernel for the list of available modes again. It should no
longer trust information obtained prior to getting the bad link status,
including the current mode.

But specifically, I think you're right, and AFAICT asking for the list
of modes again is the only way for the userspace to distinguish between
the two cases. I don't think there's a shortcut for deciding the current
mode is still valid.

To avoid the busy-loop problem, I think I'd like this patch to re-query
the kernel to ask about the current mode list, and only try to re-set
the mode if our mode is still there.

If the mode isn't there, then it's up to the DE to take action in
response to the notification of new modes.  If you don't have a DE to
take appropriate action, you're kind of out of luck.

As far as the ABI goes, this seems fine to me.  The only concern I had
about ABI was having to walk all the connectors on every uevent to see
if any had gone bad -- couldn't we have a flag of some sort about what
the uevent indicates?  But uevents should be super rare, so I'd say the
kernel could go ahead with the current plan.

Yes I agree. The kernel sets the link status BAD as soon as link training fails
but does not prune the modes until a new modelist is requested by the userspace.
So this patch should re-query the mode list as soon as it sees the link status
BAD in order for the kernel to validate the modes again based on new link
parameters and send a new mode list back.

Seems like bad behaviour from the kernel, isn't it? It should return
immediately if the mode is gonna be pruned :s

The mode list pruning isn't relevant for modesets, the kernel doesn't
validate requested modes against that. The mode list is purely for
userspace's information. Which means updating it only when userspace asks
for it is fine.


Hmm, ok. Fair enough!


I also thought some more about the loop when we're stuck on BAD, and I
think it shouldn't happen: When the link goes BAD we update the link
parameter values to the new limits, and the kernel will reject any mode
that won't fit anymore. Of course you might be unlucky, and the new link
limits are also not working, but eventually you're stuck at the "your
link is broken, sry" stage, where the kernel rejects everything :-)

So I think the busy-loop has strictly a limited amount of time until it
runs out of steam.


OK, I have given it more thought and discussed with Ville and Chris IRL
or on IRC.

My current proposal is based on the idea that the kernel should reject a
mode it knows it cannot set. This is already largely tested in real
life: Try setting 4K@60Hz on an HDMI 1.x monitor, it should immediately
fail on prepare(). For this proposal to work, we would need to put in
the ABI that a driver that sets the link-status to BAD should also make
sure that whatever the userspace does, no infinite loop should be
created (by changing the maximum link characteristics before setting the
link-status property).

Re-probing the list of modes

Re: [RFC PATCH xserver] modesetting: re-set the crtc's mode when link-status goes BAD

2017-02-02 Thread Martin Peres

On 01/02/17 22:05, Manasi Navare wrote:

On Wed, Feb 01, 2017 at 11:58:16AM -0800, Eric Anholt wrote:

Jani Nikula <jani.nik...@linux.intel.com> writes:


On Tue, 31 Jan 2017, Eric Anholt <e...@anholt.net> wrote:

Martin Peres <martin.pe...@linux.intel.com> writes:


Despite all the careful planning of the kernel, a link may become
insufficient to handle the currently-set mode. At this point, the
kernel should mark this particular configuration as being broken
and potentially prune the mode before setting the offending connector's
link-status to BAD and sending the userspace a hotplug event. This may
happen right after a modeset or later on.

When available, we should use the link-status information to reset
the wanted mode.

Signed-off-by: Martin Peres <martin.pe...@linux.intel.com>

If I understand this right, there are two failure modes being handled:

1) A mode that won't actually work because the link isn't good enough.

2) A mode that should work, but link parameters were too optimistic and
if we just ask the kernel to set the mode again it'll use more
conservative parameters that work.

This patch seems good for 2).  For 1), the drmmode_set_mode_major is
going to set our old mode back.  Won't the modeset then fail to link
train again, and bring us back into this loop?  The only escape that I
see would be some other userspace responding to the advertised mode list
changing, and then asking X to modeset to something new.

To avoid that failure busy loop, should we re-fetching modes at this
point, and only re-setting if our mode still exists?

Disclaimer: I don't know anything about the internals of the modesetting
driver.

Perhaps we can identify the two cases now, but I'd put this more
generally: if the link status has gone bad, it's an indicator to
userspace that the circumstances may have changed, and userspace should
query the kernel for the list of available modes again. It should no
longer trust information obtained prior to getting the bad link status,
including the current mode.

But specifically, I think you're right, and AFAICT asking for the list
of modes again is the only way for the userspace to distinguish between
the two cases. I don't think there's a shortcut for deciding the current
mode is still valid.

To avoid the busy-loop problem, I think I'd like this patch to re-query
the kernel to ask about the current mode list, and only try to re-set
the mode if our mode is still there.

If the mode isn't there, then it's up to the DE to take action in
response to the notification of new modes.  If you don't have a DE to
take appropriate action, you're kind of out of luck.

As far as the ABI goes, this seems fine to me.  The only concern I had
about ABI was having to walk all the connectors on every uevent to see
if any had gone bad -- couldn't we have a flag of some sort about what
the uevent indicates?  But uevents should be super rare, so I'd say the
kernel could go ahead with the current plan.

Yes I agree. The kernel sets the link status BAD as soon as link training fails
but does not prune the modes until a new modelist is requested by the userspace.
So this patch should re-query the mode list as soon as it sees the link status
BAD in order for the kernel to validate the modes again based on new link
parameters and send a new mode list back.


Seems like bad behaviour from the kernel, isn't it? It should return
immediately if the mode is gonna be pruned :s

With the behaviour you are talking about, I should see 2 uevents when
injecting one BAD link-state (first uevent generates a new modeset that
will then generate a BAD state and another uevent, but this time the
mode will have been pruned so when -modesetting tries to set the mode,
it will fail immediately). During my testing, I do not remember seeing
such behaviour, so the kernel seemed to be doing the right thing from my
PoV (failing a modeset to a mode that is known not to be achievable). I
can verify tomorrow, but it would be nice to make sure it is part of the
ABI.

As for re-fetching the modes, this is something the DE will do anyway
when asking for them via randr. So, really, that will generate double
probing in the common case for what seems to be a workaround. Given that
probing can be a super expensive operation (request EDID from all
monitors, potentially first starting up powered-down GPUs such as NVIDIA
or AMD), I would say that probing should not be taken lightly.


Isn't it possible to just return an error from the kernel if the mode
should disappear? As far as my testing goes, this was already what
seemed to be happening... but I may be wrong, especially since my DP
monitor was fine with no link training whatsoever. What is the current
ABI for the userspace requesting a mode from a previous monitor to a new
one, because it did not reprobe?

In any case, this is a good discussion to have!

Remember that it could still not prune any mode if the same mode is valid
with lower

[RFC PATCH xserver] modesetting: re-set the crtc's mode when link-status goes BAD

2017-01-26 Thread Martin Peres
Despite all the careful planning of the kernel, a link may become
insufficient to handle the currently-set mode. At this point, the
kernel should mark this particular configuration as being broken
and potentially prune the mode before setting the offending connector's
link-status to BAD and sending the userspace a hotplug event. This may
happen right after a modeset or later on.

When available, we should use the link-status information to reset
the wanted mode.

Signed-off-by: Martin Peres <martin.pe...@linux.intel.com>
---

WARNING: The patches have not been merged in the kernel yet, so this patch is
merely an RFC.

This patch is the result of discussions happening mostly in DRI-devel and
Intel-GFX on how to handle link training failures. I would advise reading the
thread [0] first and then this thread [1] which explain in great length why this
is needed and why the selected approach seems to be the best.

The relevant kernel patches + this patch are enough to pass the relevant
DisplayPort compliance tests, provided that the Desktop Environment or another
program ([2]?) provides the initial modesetting on hotplug.

Would this patch be acceptable to you? Any comments or suggestions?

[0] https://lists.freedesktop.org/archives/dri-devel/2016-November/123366.html
[1] https://lists.freedesktop.org/archives/dri-devel/2016-November/124736.html
[2] https://cgit.freedesktop.org/~mperes/auto-randr/


 hw/xfree86/drivers/modesetting/drmmode_display.c | 51 
 1 file changed, 51 insertions(+)

diff --git a/hw/xfree86/drivers/modesetting/drmmode_display.c 
b/hw/xfree86/drivers/modesetting/drmmode_display.c
index 6e755e9482..3144620c67 100644
--- a/hw/xfree86/drivers/modesetting/drmmode_display.c
+++ b/hw/xfree86/drivers/modesetting/drmmode_display.c
@@ -2262,6 +2262,10 @@ drmmode_setup_colormap(ScreenPtr pScreen, ScrnInfoPtr 
pScrn)
 }
 
 #ifdef CONFIG_UDEV_KMS
+
+#define DRM_MODE_LINK_STATUS_GOOD   0
+#define DRM_MODE_LINK_STATUS_BAD1
+
 static void
 drmmode_handle_uevents(int fd, void *closure)
 {
@@ -2281,6 +2285,49 @@ drmmode_handle_uevents(int fd, void *closure)
 if (!found)
 return;
 
+/* Try to re-set the mode on all the connectors with a BAD link-state:
+ * This may happen if a link degrades and a new modeset is necessary, using
+ * different link-training parameters. If the kernel found that the current
+ * mode is not achievable anymore, it should have pruned the mode before
+ * sending the hotplug event. Try to re-set the currently-set mode to keep
+ * the display alive, this will fail if the mode has been pruned.
+ * In any case, we will send randr events for the Desktop Environment to
+ * deal with it, if it wants to.
+ */
+for (i = 0; i < config->num_output; i++) {
+xf86OutputPtr output = config->output[i];
+drmmode_output_private_ptr drmmode_output = output->driver_private;
+uint32_t con_id = drmmode_output->mode_output->connector_id;
+drmModeConnectorPtr koutput;
+
+/* Get an updated view of the properties for the current connector and
+ * look for the link-status property
+ */
+koutput = drmModeGetConnectorCurrent(drmmode->fd, con_id);
+for (j = 0; koutput && j < koutput->count_props; j++) {
+drmModePropertyPtr props;
+props = drmModeGetProperty(drmmode->fd, koutput->props[j]);
+if (props && props->flags & DRM_MODE_PROP_ENUM &&
+!strcmp(props->name, "link-status") &&
+koutput->prop_values[j] == DRM_MODE_LINK_STATUS_BAD) {
+xf86CrtcPtr crtc = output->crtc;
+if (!crtc)
+continue;
+
+/* the connector got a link failure, re-set the current mode */
+drmmode_set_mode_major(crtc, >mode, crtc->rotation,
+   crtc->x, crtc->y);
+
+xf86DrvMsg(scrn->scrnIndex, X_WARNING,
+   "hotplug event: connector %u's link-state is BAD, "
+   "tried resetting the current mode. You may be left "
+   "with a black screen if this fails...\n", con_id);
+}
+drmModeFreeProperty(props);
+}
+drmModeFreeConnector(koutput);
+}
+
 mode_res = drmModeGetResources(drmmode->fd);
 if (!mode_res)
 goto out;
@@ -2345,6 +2392,10 @@ out_free_res:
 out:
 RRGetInfo(xf86ScrnToScreen(scrn), TRUE);
 }
+
+#undef DRM_MODE_LINK_STATUS_BAD
+#undef DRM_MODE_LINK_STATUS_GOOD
+
 #endif
 
 void
-- 
2.11.0

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [Intel-gfx] [PATCH 0/2] drm: link status property and DP link training failure handling

2017-01-20 Thread Martin Peres

On 20/01/17 18:44, Jani Nikula wrote:

On Fri, 20 Jan 2017, Martin Peres <martin.pe...@linux.intel.com> wrote:

On 19/01/17 11:18, Jani Nikula wrote:

On Wed, 18 Jan 2017, Martin Peres <martin.pe...@linux.intel.com> wrote:

On 16/12/16 15:48, Daniel Vetter wrote:

On Fri, Dec 16, 2016 at 12:29:05PM +0200, Jani Nikula wrote:

The two remaining patches from [1], rebased.

BR,
Jani.


[1] 
http://mid.mail-archive.com/1480984058-552-1-git-send-email-manasi.d.navare@intel.com


Just for the record, I think the only thing missing here is the Xorg
review on the -modesetting patch. As soon as we have that I can vacuum
this up (probably best through drm-misc, but not sure).


Hey Daniel,

I tested again on Monday -modesetting with the patch from Jani to inject
faults and did manage to get both the link-status BAD and a lower
resolution got selected dynamically when running KDE. For the latter, I
however needed the following patch:
https://patchwork.kernel.org/patch/9491869/

Now, that being said, Jani's patch just prevents a new modeset to work,
it does not tear down the current mode. This may be the reason why I do
not manage to get a black screen after > 3 failures (and already a
1024x768 resolution).

I however need to do more testing when running without a DE (straight X
+ twm and xterm). Indeed, when I hotplug my DP screen, it gets to the
native resolution automatically without me asking for it using xrandr.
Also, the mode that is set does not seem to go through
intel_dp_start_link_train (what the heck?), so I do not get any failure
and I cannot induce one :s


Huh, does your X + twm actually respond to hotplugs?



No, that was the point of the test :) I just wanted to see the screen
turn black but it did not because even if the link training fails, i915
keeps on going and pushes the pixels out constantly. My screen was
good enough to pick it up and display it without complaining ... which
is really surprising for DP, isn't it?


And this, my friend, is why we plunge on in spite of errors, instead of
failing fast and bailing out. If you were a normal user, you'd
*appreciate* not having a black screen! ;)


I definitely would *love* this approach. Just was confusing ;) Thanks a 
lot for yesterday's discussion!




Re: [Intel-gfx] [PATCH 0/2] drm: link status property and DP link training failure handling

2017-01-20 Thread Martin Peres

On 19/01/17 11:18, Jani Nikula wrote:

On Wed, 18 Jan 2017, Martin Peres <martin.pe...@linux.intel.com> wrote:

On 16/12/16 15:48, Daniel Vetter wrote:

On Fri, Dec 16, 2016 at 12:29:05PM +0200, Jani Nikula wrote:

The two remaining patches from [1], rebased.

BR,
Jani.


[1] 
http://mid.mail-archive.com/1480984058-552-1-git-send-email-manasi.d.navare@intel.com


Just for the record, I think the only thing missing here is the Xorg
review on the -modesetting patch. As soon as we have that I can vacuum
this up (probably best through drm-misc, but not sure).


Hey Daniel,

I tested again on Monday -modesetting with the patch from Jani to inject
faults and did manage to get both the link-status BAD and a lower
resolution got selected dynamically when running KDE. For the latter, I
however needed the following patch:
https://patchwork.kernel.org/patch/9491869/

Now, that being said, Jani's patch just prevents a new modeset to work,
it does not tear down the current mode. This may be the reason why I do
not manage to get a black screen after > 3 failures (and already a
1024x768 resolution).

I however need to do more testing when running without a DE (straight X
+ twm and xterm). Indeed, when I hotplug my DP screen, it gets to the
native resolution automatically without me asking for it using xrandr.
Also, the mode that is set does not seem to go through
intel_dp_start_link_train (what the heck?), so I do not get any failure
and I cannot induce one :s


Huh, does your X + twm actually respond to hotplugs?



No, that was the point of the test :) I just wanted to see the screen 
turn black but it did not because even if the link training fails, i915 
keeps on going and pushes the pixels out constantly. My screen was 
good enough to pick it up and display it without complaining ... which
is really surprising for DP, isn't it?



Re: [Intel-gfx] [PATCH 0/2] drm: link status property and DP link training failure handling

2017-01-20 Thread Martin Peres

On 19/01/17 13:34, Ville Syrjälä wrote:

On Wed, Jan 18, 2017 at 11:05:18PM +0200, Martin Peres wrote:

On 16/12/16 15:48, Daniel Vetter wrote:

On Fri, Dec 16, 2016 at 12:29:05PM +0200, Jani Nikula wrote:

The two remaining patches from [1], rebased.

BR,
Jani.


[1] 
http://mid.mail-archive.com/1480984058-552-1-git-send-email-manasi.d.navare@intel.com


Just for the record, I think the only thing missing here is the Xorg
review on the -modesetting patch. As soon as we have that I can vacuum
this up (probably best through drm-misc, but not sure).


Hey Daniel,

I tested again on Monday -modesetting with the patch from Jani to inject
faults and did manage to get both the link-status BAD and a lower
resolution got selected dynamically when running KDE. For the latter, I
however needed the following patch:
https://patchwork.kernel.org/patch/9491869/

Now, that being said, Jani's patch just prevents a new modeset to work,
it does not tear down the current mode. This may be the reason why I do
not manage to get a black screen after > 3 failures (and already a
1024x768 resolution).

I however need to do more testing when running without a DE (straight X
+ twm and xterm). Indeed, when I hotplug my DP screen, it gets to the
native resolution automatically without me asking for it using xrandr.
Also, the mode that is set does not seem to go through
intel_dp_start_link_train (what the heck?), so I do not get any failure
and I cannot induce one :s


Presumably you're already pushing pixels out at the same resolution and
the monitor just syncs up to it. I think depending on the monitor that
might happen w/ or w/o link retraining.


Thanks a lot Ville, this is exactly what was going on!

I worked yesterday and today on a tool that would respond to randr 
events and perform a modeset to the highest mode available. As far as I 
can tell, it works exactly as expected!


I do not use the link-status much in -modesetting, aside from doing a 
fast-retraining in case of a BAD state, but I definitely proved that 
this kernel tree[0], this xserver patch[1] and this tool[2] are enough 
to handle gracefully link degradations and prune modes as they become 
unstable.


Please review the general idea and I will send the cleaned -modesetting 
patch as soon as we all agree. I will let Manasi do more testing with it 
and report back to us.


Thanks everyone for your help!

Martin

[0] https://cgit.freedesktop.org/~mperes/linux/log/?h=dp-compliance
[1] 
https://cgit.freedesktop.org/~mperes/xserver/commit/?h=dpcompliance&id=ce03d856c1121ca475cc88b9c64c2d2b95fa20bc

[2] https://cgit.freedesktop.org/~mperes/auto-randr


Re: [Intel-gfx] [PATCH 0/2] drm: link status property and DP link training failure handling

2017-01-18 Thread Martin Peres

On 16/12/16 15:48, Daniel Vetter wrote:

On Fri, Dec 16, 2016 at 12:29:05PM +0200, Jani Nikula wrote:

The two remaining patches from [1], rebased.

BR,
Jani.


[1] 
http://mid.mail-archive.com/1480984058-552-1-git-send-email-manasi.d.navare@intel.com


Just for the record, I think the only thing missing here is the Xorg
review on the -modesetting patch. As soon as we have that I can vacuum
this up (probably best through drm-misc, but not sure).


Hey Daniel,

I tested again on Monday -modesetting with the patch from Jani to inject 
faults and did manage to get both the link-status BAD and a lower 
resolution got selected dynamically when running KDE. For the latter, I
however needed the following patch: 
https://patchwork.kernel.org/patch/9491869/


Now, that being said, Jani's patch just prevents a new modeset to work, 
it does not tear down the current mode. This may be the reason why I do 
not manage to get a black screen after > 3 failures (and already a 
1024x768 resolution).


I however need to do more testing when running without a DE (straight X 
+ twm and xterm). Indeed, when I hotplug my DP screen, it gets to the 
native resolution automatically without me asking for it using xrandr. 
Also, the mode that is set does not seem to go through 
intel_dp_start_link_train (what the heck?), so I do not get any failure 
and I cannot induce one :s


Because of this, I am a little uneasy and cannot say for sure that my 
patch for -modesetting is correct. I am hoping that Manasi will get a 
good testing rig to validate everything from end-to-end. At the very 
least, -modesetting seems to do a reasonable job.


Sorry for keeping you out of the loop, but I was hoping to have a more 
clear-cut GOOD/BAD, but I seem to be mostly at a road block. Manasi told 
me it would help if I described it here, so here I am. I will continue 
assisting Manasi in her validation work!


Martin

PS: My kernel tree is here: https://cgit.freedesktop.org/~mperes/linux


[RFC 0/6] drm: Add support for userspace drivers

2017-01-04 Thread Martin Peres
On 04/01/17 17:06, Daniel Vetter wrote:
> On Wed, Jan 04, 2017 at 02:34:36PM +0100, Noralf Trønnes wrote:
>> Hi,
>>
>> I was previously working on tinydrm as a replacement for staging/fbtft.
>> During a break from that work, I started to think about if it would be
>> possible to move the drivers to userspace instead. No point in having
>> them in the kernel if it's not necessary.
>>
>> This patchset includes all the pieces necessary to get a userspace
>> driver[1] working that is compatible with the fbtft/fb_ili9341 driver.
>> It is tested with a SPI interfaced display hat/shield for the Raspberry Pi,
>> which has an eeprom with a Device Tree fragment on it (merged by the
>> bootloader). With the help of udev and systemd, the driver is able to
>> autoload when the display is connected.
>> Performance wise, the kernel driver can do ~19fps on a 320x240 spi at 32MHz
>> display, whereas the userspace version does ~18fps. So performance is not
>> an argument to keep the driver in the kernel.
>>
>> What do you think about this approach?
>
> It's a pretty nifty thing, and iirc some of the ideas for implementing
> miracast centered around the same idea: Small userspace drm driver shim
> that forwards everything to the miracast code running all in userspace.
> David Herrmann looked into that years ago, not sure he ever got the full
> stack up

This is actually something David Herrmann told us not to do, stating 
that it was too hard for Desktop Environments to select which GPUs to 
output to. I however disagreed as this is something they need to support 
anyway for USB GPUs and miracast could piggyback on this effort.

An engineer in Intel Finland did work on this and we got approval for 
Open Sourcing it but I really have no time for this right now though :s

That was my 2 cents.

Martin


Implementing Miracast

2017-01-03 Thread Martin Peres
On 03/01/17 22:47, Jani Nikula wrote:
> On Fri, 23 Dec 2016, norbert  wrote:
>> Hello,
>> about a year ago there was a discussion about Implementing Miracast on
>> this list:
>>
>> https://lists.freedesktop.org/archives/dri-devel/2015-December/096035.html
>>
>> Since then I could not find further information about that topic there.
>> So maybe someone can answer this question:
>> Is that implementation  expected to be available in the near future or
>> did that development stop?
>>
>> Thanks
>> Norbert Wegener
>
> Martin, do you know more?

Hey Norbert,

Good to see interest from you on this aspect. The code is available 
internally and we went through our process to Open Source it. I now have 
it available but need to re-write some parts so as to be able to have it 
under the MIT license, before I can send an RFC.

The project is not cancelled, we just have a lot of things on our plates.

Thanks for your interest!

Martin


[PATCH] drm/nouveau: fix LEDS_CLASS=m configuration

2016-11-08 Thread Martin Peres
On 08/11/16 15:56, Arnd Bergmann wrote:
> The newly introduced LED handling for nouveau fails to link when the
> driver is built-in but the LED subsystem is a loadable module:
>
> drivers/gpu/drm/nouveau/nouveau.o: In function `nouveau_do_suspend':
> tvnv17.c:(.text.nouveau_do_suspend+0x10): undefined reference to 
> `nouveau_led_suspend'
> drivers/gpu/drm/nouveau/nouveau.o: In function `nouveau_do_resume':
> tvnv17.c:(.text.nouveau_do_resume+0xf0): undefined reference to 
> `nouveau_led_resume'
> drivers/gpu/drm/nouveau/nouveau.o: In function `nouveau_drm_unload':
> tvnv17.c:(.text.nouveau_drm_unload+0x34): undefined reference to 
> `nouveau_led_fini'
> drivers/gpu/drm/nouveau/nouveau.o: In function `nouveau_drm_load':
> tvnv17.c:(.text.nouveau_drm_load+0x7d0): undefined reference to 
> `nouveau_led_init'
>
> This adds a separate Kconfig symbol for the LED support that
> correctly tracks the dependencies.
>
> Fixes: 8d021d71b324 ("drm/nouveau/drm/nouveau: add a LED driver for the 
> NVIDIA logo")
> Signed-off-by: Arnd Bergmann 
> ---
>   drivers/gpu/drm/nouveau/Kbuild| 2 +-
>   drivers/gpu/drm/nouveau/Kconfig   | 8 
>   drivers/gpu/drm/nouveau/nouveau_led.h | 2 +-
>   3 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/Kbuild b/drivers/gpu/drm/nouveau/Kbuild
> index fde6e3656636..5e00e911daa6 100644
> --- a/drivers/gpu/drm/nouveau/Kbuild
> +++ b/drivers/gpu/drm/nouveau/Kbuild
> @@ -22,7 +22,7 @@ nouveau-$(CONFIG_DEBUG_FS) += nouveau_debugfs.o
>   nouveau-y += nouveau_drm.o
>   nouveau-y += nouveau_hwmon.o
>   nouveau-$(CONFIG_COMPAT) += nouveau_ioc32.o
> -nouveau-$(CONFIG_LEDS_CLASS) += nouveau_led.o
> +nouveau-$(CONFIG_DRM_NOUVEAU_LED) += nouveau_led.o
>   nouveau-y += nouveau_nvif.o
>   nouveau-$(CONFIG_NOUVEAU_PLATFORM_DRIVER) += nouveau_platform.o
>   nouveau-y += nouveau_usif.o # userspace <-> nvif
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index 78631fb61adf..715cd6f4dc31 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -46,6 +46,14 @@ config NOUVEAU_DEBUG
> The paranoia and spam levels will add a lot of extra checks which
> may potentially slow down driver operation.
>   
> +config DRM_NOUVEAU_LED
> + bool "Support for logo LED"
> + depends on DRM_NOUVEAU && LEDS_CLASS
> + depends on !(DRM_NOUVEAU=y && LEDS_CLASS=m)
> + help
> +   Say Y here to enable controlling the brightness of the
> +   LED behind NVIDIA logo on certain Titan cards.

Titan is a little too specific and inaccurate. How about high-end cards?

As for the patch itself... I was going for the auto-magic approach but
apparently failed at it :s Sorry... I guess what I wanted to do was to
only enable the LED support if LEDS_CLASS=y.

Anyway, your approach is cleaner because it makes it easy for the user
to say whether they want to enable it.
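
For what it's worth, the same "bool feature inside a tristate driver" constraint is often expressed in kernel Kconfig with a value comparison instead of negating the bad combination. A sketch (untested, but it should allow exactly the same (DRM_NOUVEAU, LEDS_CLASS) combinations as Arnd's two `depends on` lines: y/y, m/y and m/m):

```kconfig
config DRM_NOUVEAU_LED
	bool "Support for logo LED"
	depends on DRM_NOUVEAU
	# legal when LEDS_CLASS is built-in, or modular together with nouveau
	depends on LEDS_CLASS=y || LEDS_CLASS=DRM_NOUVEAU
```

The comparison form reads as "the LED class must not be *more* modular than the driver", which is the actual linking constraint.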

I will have a closer look at it later. Thanks anyway for reporting the 
issue!

Martin


XDC 2016 : Call for paper

2016-07-12 Thread Martin Peres
On 13/05/16 01:56, Martin Peres wrote:
> Hello,
>
> I have the pleasure to announce that the X.org Developer Conference 2016
> will be held in Helsinki from September 21 to September 23. The venue is
> located at Haaga-Helia university[0], next to the Pasila station.
>
> The official page for the event is http://www.x.org/wiki/Events/XDC2016
> while the call for paper is at http://www.x.org/wiki/Other/Press/CFP2016/

Friendly reminder that the deadline for the abstracts is in about a 
month (August 17th).

Please also remember to register for the conference by adding yourself 
to the attendees list[1] or asking me to do it.

See you there!

Martin

[1] http://www.x.org/wiki/Events/XDC2016
>
> As usual, we are open to talks across the layers of the graphics stack,
> from
> the kernel to desktop environments / graphical applications and about how
> to make things better for the developers who build them. If you're not sure
> if something might fit, mail me or add it to the ideas list found in the
> program page.
>
> The conference is free of charge and opened to the general public. If
> you plan on coming, please add yourself to the attendees list. We'll
> use this list to make badges and plan for the catering, so if you are
> attending please add your name as early as possible.
>
> I am looking forward to seeing you there. If you have any
> inquiries/questions, please send them to me (please also CC: board at
> foundation.x.org).
>
> Martin Peres
>
> [0]
> https://www.google.fi/maps/place/Ratapihantie+13,+00520+Helsinki/@60.2013725,24.931852,17z/data=!3m1!4b1!4m5!3m4!1s0x469209921fdba137:0x841c90a2862185af!8m2!3d60.2013725!4d24.9340407
>
>


XDC 2016 : Call for paper

2016-05-13 Thread Martin Peres
Hello,

I have the pleasure to announce that the X.org Developer Conference 2016
will be held in Helsinki from September 21 to September 23. The venue is
located at Haaga-Helia university[0], next to the Pasila station.

The official page for the event is http://www.x.org/wiki/Events/XDC2016
while the call for paper is at http://www.x.org/wiki/Other/Press/CFP2016/

As usual, we are open to talks across the layers of the graphics stack, from
the kernel to desktop environments / graphical applications and about how
to make things better for the developers who build them. If you're not sure
if something might fit, mail me or add it to the ideas list found in the
program page.

The conference is free of charge and opened to the general public. If
you plan on coming, please add yourself to the attendees list. We'll
use this list to make badges and plan for the catering, so if you are
attending please add your name as early as possible.

I am looking forward to seeing you there. If you have any
inquiries/questions, please send them to me (please also CC: board at
foundation.x.org).

Martin Peres

[0] 
https://www.google.fi/maps/place/Ratapihantie+13,+00520+Helsinki/@60.2013725,24.931852,17z/data=!3m1!4b1!4m5!3m4!1s0x469209921fdba137:0x841c90a2862185af!8m2!3d60.2013725!4d24.9340407



[PATCH 5/9] drm/i915: Enable i915 perf stream for Haswell OA unit

2016-05-04 Thread Martin Peres
On 03/05/16 23:03, Robert Bragg wrote:
>
>
> On Tue, May 3, 2016 at 8:34 PM, Robert Bragg <robert at sixbynine.org> wrote:
>
> Sorry for the delay replying to this, I missed it.
>
> On Sat, Apr 23, 2016 at 11:34 AM, Martin Peres <martin.peres at free.fr> wrote:
>
> On 20/04/16 17:23, Robert Bragg wrote:
>
> Gen graphics hardware can be set up to periodically write
> snapshots of
> performance counters into a circular buffer via its Observation
> Architecture and this patch exposes that capability to
> userspace via the
> i915 perf interface.
>
> Cc: Chris Wilson <chris at chris-wilson.co.uk>
> Signed-off-by: Robert Bragg <robert at sixbynine.org>
> Signed-off-by: Zhenyu Wang <zhenyuw at linux.intel.com>
> ---
>   drivers/gpu/drm/i915/i915_drv.h |  56 +-
>   drivers/gpu/drm/i915/i915_gem_context.c |  24 +-
>   drivers/gpu/drm/i915/i915_perf.c| 940
> +++-
>   drivers/gpu/drm/i915/i915_reg.h | 338 
>   include/uapi/drm/i915_drm.h |  70 ++-
>   5 files changed, 1408 insertions(+), 20 deletions(-)
>
> +
> +
> +   /* It takes a fairly long time for a new MUX
> configuration to
+* be applied after these register writes. This delay
> +* duration was derived empirically based on the
> render_basic
> +* config but hopefully it covers the maximum
> configuration
> +* latency...
> +*/
> +   mdelay(100);
>
>
> With such a HW and SW design, how can we ever hope to get any
> kind of performance when we are trying to monitor different
> metrics on each
> draw call? This may be acceptable for system monitoring, but it
> is problematic
> for the GL extensions :s
>
>
> Since it seems like we are going for a perf API, it means that
> for every change
> of metrics, we need to flush the commands, wait for the GPU to
> be done, then
> program the new set of metrics via an IOCTL, wait 100 ms, and
> then we may
> resume rendering ... until the next change. We are talking about
> a latency of
> 6-7 frames at 60 Hz here... this is non-negligible...
>
>
> I understand that we have a ton of counters and we may hide
> latency by not
> allowing using more than half of the counters for every draw
> call or frame, but
> even then, this 100ms delay is killing this approach altogether.
>
>
>
>
> So revisiting this to double check how things fail with my latest
> driver/tests without the delay, I apparently can't reproduce test
> failures without the delay any more...
>
> I think the explanation is that since first adding the delay to the
> driver I also made the driver a bit more careful to not forward
> spurious reports that look invalid due to a zeroed report id field, and
> that mechanism keeps the unit tests happy, even though there are still
> some number of invalid reports generated if we don't wait.
>
> One problem with simply having no delay is that the driver prints an
> error if it sees an invalid reports so I get a lot of 'Skipping
> spurious, invalid OA report' dmesg spam. Also this was intended more as
> a last resort mechanism, and I wouldn't feel too happy about squashing
> the error message and potentially sweeping other error cases under the
> carpet.
>
> Experimenting to see if the delay can at least be reduced, I brought the
> delay up in millisecond increments and found that although I still see a
> lot of spurious reports only waiting 1 or 5 milliseconds, at 10
> milliseconds its reduced quite a bit and at 15 milliseconds I don't seem
> to have any errors.
>
> 15 milliseconds is still a long time, but at least not as long as 100.

OK, so the issue does not come from the HW after all, great!

Now, my main question is, why are spurious events generated when 
changing the MUX's value? I can understand that we would need to ignore 
the reading that came right after the change, but other than this, I am
a bit at a loss.

I am a bit swamped with other tasks right now, but I would love to spend 
more time reviewing your code as I really want to see this upstream!

Martin


[Intel-gfx] [PATCH 5/9] drm/i915: Enable i915 perf stream for Haswell OA unit

2016-05-04 Thread Martin Peres
On 03/05/16 22:34, Robert Bragg wrote:
> Sorry for the delay replying to this, I missed it.

No worries!

>
> On Sat, Apr 23, 2016 at 11:34 AM, Martin Peres <martin.peres at free.fr> wrote:
>
> On 20/04/16 17:23, Robert Bragg wrote:
>
> Gen graphics hardware can be set up to periodically write
> snapshots of
> performance counters into a circular buffer via its Observation
> Architecture and this patch exposes that capability to userspace
> via the
> i915 perf interface.
>
> Cc: Chris Wilson <chris at chris-wilson.co.uk>
> Signed-off-by: Robert Bragg <robert at sixbynine.org>
> Signed-off-by: Zhenyu Wang <zhenyuw at linux.intel.com>
> ---
>   drivers/gpu/drm/i915/i915_drv.h |  56 +-
>   drivers/gpu/drm/i915/i915_gem_context.c |  24 +-
>   drivers/gpu/drm/i915/i915_perf.c| 940
> +++-
>   drivers/gpu/drm/i915/i915_reg.h | 338 
>   include/uapi/drm/i915_drm.h |  70 ++-
>   5 files changed, 1408 insertions(+), 20 deletions(-)
>
> +
> +
> +   /* It takes a fairly long time for a new MUX
> configuration to
> +* be applied after these register writes. This delay
> +* duration was derived empirically based on the
> render_basic
> +* config but hopefully it covers the maximum configuration
> +* latency...
> +*/
> +   mdelay(100);
>
>
> With such a HW and SW design, how can we ever hope to get any
> kind of performance when we are trying to monitor different metrics
> on each
> draw call? This may be acceptable for system monitoring, but it is
> problematic
> for the GL extensions :s
>
>
> Since it seems like we are going for a perf API, it means that for
> every change
> of metrics, we need to flush the commands, wait for the GPU to be
> done, then
> program the new set of metrics via an IOCTL, wait 100 ms, and then
> we may
> resume rendering ... until the next change. We are talking about a
> latency of
> 6-7 frames at 60 Hz here... this is non-negligible...
>
>
> I understand that we have a ton of counters and we may hide latency
> by not
> allowing using more than half of the counters for every draw call or
> frame, but
> even then, this 100ms delay is killing this approach altogether.
>
>
>
> Although I'm also really unhappy about introducing this delay recently,
> the impact of the delay is typically amortized somewhat by keeping a
> configuration open as long as possible.
>
> Even without this explicit delay here the OA unit isn't suited to being
> reconfigured on a per draw call basis, though it is able to support per
> draw call queries with the same config.
>
> The above assessment assumes wanting to change config between draw calls
> which is not something this driver aims to support - as the HW isn't
> really designed for that model.
>
> E.g. in the case of INTEL_performance_query, the backend keeps the i915
> perf stream open until all OA based query objects are deleted - so you
> have to be pretty explicit if you want to change config.

OK, I get your point. However, I still want to state that applications
changing the set would see a disastrous effect, as 100 ms is enough to
downclock both the CPU and GPU, which would dramatically alter the
metrics. Should we make it clear somewhere, either in the 
INTEL_performance_query or as a warning in mesa_performance if changing 
the set while running? I would think the latter would be preferable as 
it could also cover the case of the AMD extension which, IIRC, does not 
talk about the performance cost of changing the metrics. With this 
caveat made clear, it seems reasonable.

>
> Considering the sets available on Haswell:
> * Render Metrics Basic
> * Compute Metrics Basic
> * Compute Metrics Extended
> * Memory Reads Distribution
> * Memory Writes Distribution
> * Metric set SamplerBalance
>
> Each of these configs can expose around 50 counters as a set.
>
> A GL application is most likely just going to use the render basic set,
> and In the case of a tool like gputop/GPA then changing config would
> usually be driven by some user interaction to select a set of metrics,
> where even a 100ms delay will go unnoticed.

100 ms is becoming visible, but I agree, it would not be a show stopper 
for sure.

On the APITRACE side, this should not be an 

X.Org Endless Vacation of Code (EVoC) - Help

2016-04-30 Thread Martin Peres
On 30/04/16 11:07, Manish Bisht wrote:
> Hello Devs,
>
> I want to improve the design of http://www.x.org/wiki/ website and 
> want to change the user interface with the material design theme and 
> make it mobile responsive. Can I make the proposal on this and take 
> part in the EVoC ?
>
> Waiting for replies. :)
>
> Regards,
> Manish Bisht
> Founder/CEO
> Run4Offers.com

Hey,

I am afraid this is not the goal of the EVoC. The goal is to get people 
into the development of the Open Graphics Stack. Your proposal is to
work on our infrastructure, which looks like you are looking for a bounty.

Sorry,
Martin


[PATCH 5/9] drm/i915: Enable i915 perf stream for Haswell OA unit

2016-04-23 Thread Martin Peres
On 20/04/16 17:23, Robert Bragg wrote:
> Gen graphics hardware can be set up to periodically write snapshots of
> performance counters into a circular buffer via its Observation
> Architecture and this patch exposes that capability to userspace via the
> i915 perf interface.
>
> Cc: Chris Wilson 
> Signed-off-by: Robert Bragg 
> Signed-off-by: Zhenyu Wang 
> ---
>   drivers/gpu/drm/i915/i915_drv.h |  56 +-
>   drivers/gpu/drm/i915/i915_gem_context.c |  24 +-
>   drivers/gpu/drm/i915/i915_perf.c| 940 
> +++-
>   drivers/gpu/drm/i915/i915_reg.h | 338 
>   include/uapi/drm/i915_drm.h |  70 ++-
>   5 files changed, 1408 insertions(+), 20 deletions(-)
>
> +
> +
> + /* It takes a fairly long time for a new MUX configuration to
> +  * be applied after these register writes. This delay
> +  * duration was derived empirically based on the render_basic
> +  * config but hopefully it covers the maximum configuration
> +  * latency...
> +  */
> + mdelay(100);

With such a HW and SW design, how can we ever hope to get any
kind of performance when we are trying to monitor different metrics on each
draw call? This may be acceptable for system monitoring, but it is 
problematic
for the GL extensions :s

Since it seems like we are going for a perf API, it means that for every 
change
of metrics, we need to flush the commands, wait for the GPU to be done, then
program the new set of metrics via an IOCTL, wait 100 ms, and then we may
resume rendering ... until the next change. We are talking about a 
latency of
6-7 frames at 60 Hz here... this is non-negligible...

I understand that we have a ton of counters and we may hide latency by not
allowing using more than half of the counters for every draw call or 
frame, but
even then, this 100ms delay is killing this approach altogether.

To be honest, if it indeed is an HW bug, then the approach that Samuel
Pitoiset and I used for Nouveau (pushing a handle representing a
pre-computed configuration to the command buffer, so that a software method
can ask the kernel to reprogram the counters with as little idle time as
possible) would be useless, as waiting for the GPU to be idle would
usually not take more than a few ms... which is nothing compared to
waiting 100 ms.

So, now, the elephant in the room, how can it take that long to apply the
change? Are the OA registers double buffered (NVIDIA's are, so as we can
reconfigure and start monitoring multiple counters at the same time)?

Maybe this 100ms is the polling period and the HW does not allow changing
the configuration in the middle of a polling session. In this case, this 
delay
should be dependent on the polling frequency. But even then, I would really
hope that the HW would allow us to tear down everything, reconfigure and
start polling again without waiting for the next tick. If not possible, 
maybe we
can change the frequency for the polling clock to make the polling event 
happen
sooner.

HW delays are usually a few microseconds, not milliseconds; that really
suggests that something funny is happening and the HW design is not
understood properly.
If the documentation has nothing on this and the HW teams cannot help, 
then I
suggest a little REing session. I really want to see this work land, but 
the way I see
it right now is that we cannot rely on it because of this bug. Maybe 
fixing this bug
would require changing the architecture, so better address it before 
landing the
patches.

Worst case scenario, do not hesitate to contact me if none of the proposed
explanations pans out; I will take the time to read through the OA material
and try my REing skills on it. As I said, I really want to see this upstream!

Sorry...

Martin


[PATCH] drm/nouveau/nvbios/iccsense: fix semicolon.cocci warnings

2016-04-13 Thread Martin Peres
On 13/04/16 08:13, kbuild test robot wrote:
> drivers/gpu/drm/nouveau/nvkm/subdev/bios/iccsense.c:96:3-4: Unneeded semicolon
>
>
>   Remove unneeded semicolon.
>
> Generated by: scripts/coccinelle/misc/semicolon.cocci
>
> CC: Martin Peres 
> Signed-off-by: Fengguang Wu 
Signed-off-by: Martin Peres 


[PATCH] drm/nouveau/iccsense: fix ifnullfree.cocci warnings

2016-04-13 Thread Martin Peres
On 13/04/16 10:07, kbuild test robot wrote:
> drivers/gpu/drm/nouveau/nvkm/subdev/iccsense/base.c:133:2-7: WARNING: NULL 
> check before freeing functions like kfree, debugfs_remove, 
> debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider 
> reorganizing relevant code to avoid passing NULL values.
>
>   NULL check before some freeing functions is not needed.
>
>   Based on checkpatch warning
>   "kfree(NULL) is safe this check is probably not required"
>   and kfreeaddr.cocci by Julia Lawall.
>
> Generated by: scripts/coccinelle/free/ifnullfree.cocci
>
> CC: Karol Herbst 
> Signed-off-by: Fengguang Wu 
Signed-off-by: Martin Peres 


Implementing Miracast?

2015-12-10 Thread Martin Peres
On 08/12/15 19:24, David Herrmann wrote:
> Hi
>
> On Tue, Dec 8, 2015 at 5:39 PM, Martin Peres  wrote:
>> On 08/12/15 13:59, David Herrmann wrote:
>>> I looked into all this when working on WFD, but I cannot recommend
>>> going down that road. First of all, you still need heavy modifications
>>> for gnome-shell, kwin, and friends, as neither of them supports
>>> seamless drm-device hotplugging.
>>
>>
>> That would still be needed for USB GPUs though. Seems like metacity had no
>> probs
>> in 2011, but no idea how heavily patched it was:
>> https://www.youtube.com/watch?v=g54y80blzRU
>>
>> Airlied?
>
> Yes, Xorg has offload-sinks. But if you target Xorg, then you can just
> as well implement user-space sinks in Xorg, and you're done. But given
> that you talk about "compositors" here, I assume you're targeting
> wayland compositors. Otherwise, there is really nothing to implement
> in Gnome and friends to make external displays work. Supporting it in
> Xorg would be enough.

We would like to have a solution that works for as many display systems 
as possible; X and Wayland are of course the main goals. SurfaceFlinger 
support would be nice too, but we have no idea how it works.

So, we tested the following case, 3 GPUs (Intel, Nouveau, Nouveau), 3 
screens each connected to a different GPU. Screen 0 uses Intel.
Then we ran:
xrandr --setprovideroutputsource 1 Intel
xrandr --setprovideroutputsource 2 Intel

And got the 3 screens exposed by screen 0. xrandr --auto then did 
exactly what it is supposed to do. So, for the X case, there is nothing 
more to do than run the setprovideroutputsource xrandr command to add 
the new miracast screen after creating the node.

Now that I think of it, we did not try with the modesetting driver but 
we could always add support for it.

>
> Long story short: offload-sinks like UDL only work properly if you use
> Xorg (if my information is outdated, please correct me, but I haven't
> seen any multi-display-controller-support in clutter or kwin or even
> weston).

They will have to be fixed at some point if they want to support USB 
GPUs and Optimus (and miracast?). So, why require them to add specific 
code for Miracast?

Dave, what did you do to make it work automatically on metacity?

>
>> That is a fair proposal but that requires a lot more work for compositors
>> than
>> waiting for drm udev events and reusing all the existing infrastructure for
>> DRM
>> to drive the new type of display.
>
> This is not true. Again, I haven't seen any multi-display-support in
> any major compositors but Xorg (and even for Xorg I'm not entirely
> sure they support _fully_ independent display drivers, but airlied
> should know more).

Sounds about right, but as we said before, there are other important 
cases, Optimus being the most important one, that require this support. 
So, why not ride on this for the less-than-usual case which is Miracast?

>
>> I guess there are benefits to being able to output to a gstreamer backend,
>> but
>> the drm driver we propose could do just that without having to ask for a lot
>> of
>> new code, especially code that is already necessary for handling USB GPUs.
>> Moreover, the gstreamer backend would not be registered as a screen by X
>> which
>> means that games may not be able to set themselves fullscreen on this screen
>> only.
>>
>> I am open to the idea of having compositors render to a gstreamer backend,
>> but
>> I never worked with gstreamer myself so I have no clue about how suited it
>> is for
>> output management (resolution, refresh rate) and there is the added
>> difficulty of
>> the X model not working well with this approach. We will have a look at this
>> though.
>
> As I said earlier, I'm not opposed to a virtual DRM driver. I'm just
> saying that you should not expect it to work out-of-the-box in any
> major compositor. I spent a significant amount of time hacking on it,
> and my recommendation is to instead do all that in user-space. It'll
> be less work.

Yes, you are right, it will require changes for the non-X case.

Since you spent a lot of time on it, could you share with us some of the 
issues you found? We still think that using the DRM interface may be 
more work, but at least it would improve the state of the graphics stack.

Thanks,

Jaakko and Martin


Implementing Miracast?

2015-12-08 Thread Martin Peres
On 08/12/15 13:59, David Herrmann wrote:
> Hi
>
> On Fri, Dec 4, 2015 at 9:07 AM, Daniel Vetter  wrote:
>> On Thu, Dec 03, 2015 at 07:26:31PM +0200, Martin Peres wrote:
>>> You are right Ilia, this is indeed what Jaakko and I had in mind, but they
>>> did not re-use the fuse/cuse framework to do the serialization of the
>>> ioctls.
>>>
>>> Not sure what we can do against allowing proprietary drivers to use this
>>> feature though :s To be fair, nothing prevents any vendor to do this shim
>>> themselves and nvidia definitely did it, and directly called their
>>> closed-source driver.
>>>
>>> Any proposition on how to handle this case? I guess we could limit that to
>>> screens only, no rendering. That would block any serious GPU manufacturer
>>> from using this code even if any sane person would never write a driver in
>>> the userspace...
>> Hm for virtual devices like this I figured there's no point exporting the
>> full kms api to userspace, but instead we'd just need a simple kms driver
>> with just 1 crtc and 1 connector per drm_device. Plus a special device
>> node (v4l is probably inappropriate since it doesn't do damage) where the
>> miracast userspace can receive events with just the following information:
>> - virtual screen size
>> - fd to the underlying shmem node for the current fb. Or maybe a dma-buf
>>(but then we'd need the dma-buf mmap stuff to land first).
>> - damage tracking
>>
>> If we want fancy, we could allow userspace to reply (through an ioctl)
>> when it's done reading the previous image, which the kernel could then
>> forward as vblank complete events.
>>
>> Connector configuration could be done by forcing the outputs (we'll send
>> out uevents nowadays for that), so the only thing we need is some configfs
>> to instantiate new copies of this.
>>
>> At least for miracst (as opposed to full-blown hw drivers in userspace) I
>> don't think we need to export everything.
> I looked into all this when working on WFD, but I cannot recommend
> going down that road. First of all, you still need heavy modifications
> for gnome-shell, kwin, and friends, as neither of them supports
> seamless drm-device hotplugging.

That would still be needed for USB GPUs though. Seems like metacity had 
no probs
in 2011, but no idea how heavily patched it was:
https://www.youtube.com/watch?v=g54y80blzRU

Airlied?

> Hence, providing more devices than
> the main GPU just confuses them. Secondly, you really don't win much
> by re-using DRM for all that. On the contrary, you get very heavy
> overhead, need to feed all this through limited ioctl interfaces, and
> fake DRM crtcs/encoders/connectors, when all you really have is an
> mpeg stream.
The overhead is only at init time; is that really relevant? The only added 
cost would then be the page-flip ioctl, which again is not significant 
since it happens at most 60 times per second on usual monitors.

> I wouldn't mind if anyone writes a virtual DRM interface, it'd be
> really great for automated testing. However, if you want your
> wifi-display (or whatever else) integrated into desktop environments,
> then I recommend teaching those environments to accept gstreamer sinks
> as outputs.

That is a fair proposal but that requires a lot more work for 
compositors than
waiting for drm udev events and reusing all the existing infrastructure 
for DRM
to drive the new type of display.

I guess there are benefits to being able to output to a gstreamer 
backend, but
the drm driver we propose could do just that without having to ask for a 
lot of
new code, especially code that is already necessary for handling USB GPUs.
Moreover, the gstreamer backend would not be registered as a screen by X 
which
means that games may not be able to set themselves fullscreen on this 
screen only.

I am open to the idea of having compositors render to a gstreamer 
backend, but
I never worked with gstreamer myself so I have no clue about how suited 
it is for
output management (resolution, refresh rate) and there is the added 
difficulty of
the X model not working well with this approach. We will have a look at 
this though.

Martin


dumb BOs and prime

2015-12-04 Thread Martin Peres
On 04/12/15 19:49, Rob Herring wrote:
> I'm working on getting Android working with DRM drivers. ATM, I'm
> using virtio-gpu as the driver and trying to get just KMS side working
> without rendering. I have it working with stock AOSP and the emulated
> fb with a few additions to the virtio-gpu driver[1]. Now I'm trying to
> get things working with native KMS using drm_gralloc and
> drm_hwcomposer (now in AOSP). I've hit one problem though which I'm
> not sure how to solve without hacking around it.
>
> Is prime allowed on dumb BOs? AIUI, dumb buffer allocation is not
> allowed on render nodes and drmPrimeHandleToFD is not allowed on
> card0, so I'm stuck. I could open both nodes, but then I want the case
> of no render node to work. After some searching, I thought it was a
> matter of needing to do drmAuthMagic, but then found that is
> considered obsolete[2].

Obsolete when using render nodes, but still necessary on usual nodes 
(/dev/dri/cardX) as far as I remember. The usual nodes can do everything 
the render nodes can do.

Authentication should help! Please make sure you are master or 
authenticated before doing anything on the usual nodes.

>
> Rob
>
> [1] git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git android-4.4
> [2] 
> http://www.x.org/wiki/Events/XDC2013/XDC2013DavidHerrmannDRMSecurity/slides.pdf
> ___
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
>


Implementing Miracast?

2015-12-04 Thread Martin Peres
On 04/12/15 10:07, Daniel Vetter wrote:
>
> Hm for virtual devices like this I figured there's no point exporting the
> full kms api to userspace, but instead we'd just need a simple kms driver
> with just 1 crtc and 1 connector per drm_device.

Yes, we do not need anything more. But don't forget the requirement that 
we should be able to hotplug new GPUs when new screens become available 
(there may be more than one). We thus need to create a node that, when 
opened, would create a "screen" node that will be seen as a normal GPU 
by X and Wayland compositors (cardX?). One userspace process will likely 
control all the miracast screens.

> Plus a special device
> node (v4l is probably inappropriate since it doesn't do damage) where the
> miracast userspace can receive events with just the following information:

Not sure it is a good idea as it would force compositors to learn about 
miracast, which is not necessary.

> - virtual screen size
> - fd to the underlying shmem node for the current fb. Or maybe a dma-buf
>(but then we'd need the dma-buf mmap stuff to land first).

Darn it, I was sure this had already landed. I guess it is OK as long as 
we expose a GEM interface that would allow us to import the dma-buf into 
a GEM buffer which we would then mmap through the usual API. Buffer 
allocation is not necessary though.

> - damage tracking
>
> If we want fancy, we could allow userspace to reply (through an ioctl)
> when it's done reading the previous image, which the kernel could then
> forward as vblank complete events.

Sounds good :)

>
> Connector configuration could be done by forcing the outputs (we'll send
> out uevents nowadays for that), so the only thing we need is some configfs
> to instantiate new copies of this.

Are you suggesting hotplugging connectors instead of GPUs? Not sure if 
compositors will like that :s

>
> At least for miracst (as opposed to full-blown hw drivers in userspace) I
> don't think we need to export everything.

We indeed do not need to export anything related to rendering!

> Cheers, Daniel
>

Thanks for your feedback Daniel!

Martin


Implementing Miracast?

2015-12-03 Thread Martin Peres
On 03/12/15 18:38, Ilia Mirkin wrote:
> On Thu, Dec 3, 2015 at 11:10 AM, Laurent Pinchart
>  wrote:
>> Hi Ilia,
>>
>> On Thursday 03 December 2015 11:03:28 Ilia Mirkin wrote:
>>> On Thu, Dec 3, 2015 at 10:53 AM, Laurent Pinchart wrote:
 On Thursday 03 December 2015 10:42:50 Ilia Mirkin wrote:
> On Thu, Dec 3, 2015 at 10:34 AM, Laurent Pinchart wrote:
>> On Thursday 03 December 2015 14:42:51 Hannikainen, Jaakko wrote:
>>> Hello,
>>>
>>> We're developing Miracast (HDMI over Wireless connections). The
>>> current progress is that it 'works' in the userspace but doesn't have
>>> any integration with X/Wayland and can only mirror the current desktop
>>> using gstreamer.
>>>
>>> We're looking into extending the implementation so that we would be
>>> able to use the remote screens just as any other connected screen, but
>>> we're not quite sure where we should implement it.
>>>
>>> The DRM interface seems like the perfect fit since we wouldn't need to
>>> patch every compositor.
>>>
>>> Right now, gstreamer is the equivalent of the crtc/encoder, in the DRM
>>> model. Screens / crtcs are discovered using a WiFi's p2p protocol
>>> which means that screens should be hotpluggable. Since we cannot
>>> change the number of crtcs of a driver on the fly, we propose adding
>>> and removing gpus with one crtc attached and no rendering
>>> capabilities.
>>>
>>> Compositors and X currently use udev to list gpus and get run-time
>>> events for gpu hot-plugging (see the work from Dave Airlie for USB
>>> GPUs, using the modesetting X driver). We did not find a way to tell
>>> udev that we have a new device and it seems like the only way to get
>>> it to pick up our driver is from a uevent which can only be generated
>>> from the kernel.
>>>
>>> Since we have so many userspace components, it doesn't make sense to
>>> implement the entire driver in the kernel.
>>>
>>> We would thus need to have a communication from the kernel space to
>>> the userspace at least to send the flip commands to the fake crtc.
>>> Since we need this, why not implement everything in the userspace and
>>> just redirect the ioctls to the userspace driver?
>>>
>>> This is exactly what fuse / cuse [1] does, with the minor catch that
>>> it creates devices in /sys/class/cuse instead of drm. This prevents
>>> the wayland compositors and X to pick it up as a normal drm driver...
>>>
>>> We would thus need to have the drm subsystem create the device nodes
>>> for us when the userspace needs to create a new gpu. We could create a
>>> node named /dev/dri/cuse_card that, when opened, would allocate a node
>>> (/dev/dri/cardX) and would use cuse/fuse to redirect the ioctls to the
>>> process who opened /dev/dri/cuse_card.
>>>
>>> The process would then be responsible for decoding the ioctl and
>>> implementing the drm API.
>>>
>>> Since this is a major change which would allow proprietary drivers to
>>> be implemented in the userspace and since we may have missed something
>>> obvious, we would like to start a discussion on this. What are your
>>> thoughts?
>> As you raise the issue, how would you prevent proprietary userspace
>> drivers to be implemented ? Anything that would allow vendors to
>> destroy the Linux graphics ecosystem would receive a big nack from me.
> AFAIK the displaylink people already have precisely such a driver -- a
> (open-source) kernel module that allows their (closed-source)
> userspace blob to present a drm node to pass through modesetting/etc
> ioctl's.
 Are you talking about the drivers/gpu/drm/udl/ driver ? I might be wrong
 but I'm not aware of that kernel driver requiring a closed-source
 userspace blob.
>>> Nope. That driver only works for their USB2 parts. This is what I mean:
>>>
>>> https://github.com/DisplayLink/evdi
>>> http://support.displaylink.com/knowledgebase/articles/679060
>>> http://support.displaylink.com/knowledgebase/articles/615714#ubuntu
>> Right. That's out-of-tree, people are free to screw up on their own there ;-)
> Sure, but it's identical to Jaakko's proposal from what I can
> (quickly) tell. And it's an example of someone taking an interface
> like that and writing a proprietary driver on top.
>
>-ilia
>

You are right Ilia, this is indeed what Jaakko and I had in mind, but 
they did not re-use the fuse/cuse framework to do the serialization of 
the ioctls.

Not sure what we can do against allowing proprietary drivers to use this 
feature though :s To be fair, nothing prevents any vendor from doing this 
shim themselves, and nvidia definitely did it, directly calling their 
closed-source driver.

Any proposition on how to handle this case? I guess we could limit that 
to screens only, no rendering. That would block any serious GPU 
manufacturer from using this code even if any sane person would never 
write a driver in the userspace...

XDC2015 - Travel sponsorship

2015-07-31 Thread Martin Peres
Hello everyone,

We are 1.5 months away from XDC and 20 days away from the proposals 
deadline[1]!

If you did not manage to secure funding from your company but still 
think you could benefit the community by giving a talk, we encourage you 
to send an email to the board of X.Org with your talk proposal and we 
will review it in a timely fashion.

Looking forward to seeing a lot of you soon, in Toronto!

Martin, on behalf of the board of directors of X.Org

[1] http://www.x.org/wiki/Other/Press/CFP2015/


[Mesa-dev] [xorg 1/3] dri2: Allow GetBuffers to match any format

2015-06-16 Thread Martin Peres
On 20/01/15 22:49, Ian Romanick wrote:
> On 01/19/2015 03:00 AM, Chris Wilson wrote:
>> Since the introduction of DRI2GetBuffersWithFormat, the old
>> DRI2GetBuffers interface would always recreate all buffers all the time
>> as it was no longer agnostic to the format value being set by the DDXes.
>> This causes an issue with clients intermixing the two requests,
>> rendering any sharing or caching of buffers (e.g. for triple buffering)
>> void.
>>
>> Signed-off-by: Chris Wilson 
>> ---
>>   hw/xfree86/dri2/dri2.c | 13 -
>>   1 file changed, 8 insertions(+), 5 deletions(-)
>>
>> diff --git a/hw/xfree86/dri2/dri2.c b/hw/xfree86/dri2/dri2.c
>> index 43a1899..f9f594d 100644
>> --- a/hw/xfree86/dri2/dri2.c
>> +++ b/hw/xfree86/dri2/dri2.c
>> @@ -464,14 +464,16 @@ find_attachment(DRI2DrawablePtr pPriv, unsigned 
>> attachment)
>>   static Bool
>>   allocate_or_reuse_buffer(DrawablePtr pDraw, DRI2ScreenPtr ds,
>>DRI2DrawablePtr pPriv,
>> - unsigned int attachment, unsigned int format,
>> + unsigned int attachment,
>> + int has_format, unsigned int format,
>>int dimensions_match, DRI2BufferPtr * buffer)
>>   {
>>   int old_buf = find_attachment(pPriv, attachment);
>>   
>>   if ((old_buf < 0)
>>   || attachment == DRI2BufferFrontLeft
>> -|| !dimensions_match || (pPriv->buffers[old_buf]->format != 
>> format)) {
>> +|| !dimensions_match
>> +|| (has_format && pPriv->buffers[old_buf]->format != format)) {
>>   *buffer = create_buffer(ds, pDraw, attachment, format);
> Shouldn't the create_buffer change if !has_format?  If !has_format and,
> say, !dimensions_match, create_buffer will get format = 0 when it should
> get format = pPriv->buffers[old_buf]->format.  Right?

This is still a problem in the current patchset that I have. Since the 
client did not specifically ask for a certain format, why not increase 
the likelihood of being able to reuse the buffer later on by using a 
format that the application has already asked for?
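(For illustration, the reuse decision after Chris's patch boils down to the
following; a Python sketch of the C condition in allocate_or_reuse_buffer(),
not the actual server code:)

```python
def can_reuse_buffer(found_old, is_front_left, dimensions_match,
                     old_format, has_format, requested_format):
    """Reuse the old buffer unless something forces a new allocation."""
    if not found_old or is_front_left or not dimensions_match:
        return False
    # With the patch, a format mismatch only matters when the client
    # actually supplied a format (i.e. DRI2GetBuffersWithFormat was used).
    if has_format and old_format != requested_format:
        return False
    return True

# Old-style DRI2GetBuffers request (no format supplied): buffer is reused.
print(can_reuse_buffer(True, False, True, 0x8058, False, 0))   # True
```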

>
> Another alternative would be to have the caller always pass a format:
> either the format supplied in the protocol or the format of the old
> buffer.  That might be more messy.  Dunno.
>
>>   return TRUE;
>>   
>> @@ -549,7 +551,8 @@ do_get_buffers(DrawablePtr pDraw, int *width, int 
>> *height,
>>   const unsigned format = (has_format) ? *(attachments++) : 0;
>>   
>>   if (allocate_or_reuse_buffer(pDraw, ds, pPriv, attachment,
>> - format, dimensions_match, &buffers[i]))
>> + has_format, format, dimensions_match,
>> + &buffers[i]))
>>   buffers_changed = 1;
>>   
>>   if (buffers[i] == NULL)
>> @@ -584,7 +587,7 @@ do_get_buffers(DrawablePtr pDraw, int *width, int 
>> *height,
>>   
>>   if (need_real_front > 0) {
>>   if (allocate_or_reuse_buffer(pDraw, ds, pPriv, DRI2BufferFrontLeft,
>> - front_format, dimensions_match,
>> + has_format, front_format, 
>> dimensions_match,
>>&buffers[i]))
>>   buffers_changed = 1;
>>   
>> @@ -595,7 +598,7 @@ do_get_buffers(DrawablePtr pDraw, int *width, int 
>> *height,
>>   
>>   if (need_fake_front > 0) {
>>   if (allocate_or_reuse_buffer(pDraw, ds, pPriv, 
>> DRI2BufferFakeFrontLeft,
>> - front_format, dimensions_match,
>> + has_format, front_format, 
>> dimensions_match,
>>&buffers[i]))
>>   buffers_changed = 1;
>>   



[Mesa-dev] [dri2proto] Declare DRI2ParamXHasBufferAge

2015-06-16 Thread Martin Peres
On 20/01/15 22:53, Ian Romanick wrote:
> On 01/19/2015 03:00 AM, Chris Wilson wrote:
>> In order for X/DDX to reuse a driver specific field of the DRI2GetBuffers
>> reply, we need to declare the change in semantics. To indicate that the
>> flags field now continues the last swap buffers count instead, we
>> introduce the has-buffer-age parameter.
>>
>> Signed-off-by: Chris Wilson 
> Reviewed-by: Ian Romanick 
>

Reviewed-by: Martin Peres 



[Nouveau] [PATCH v2] nouveau: add coherent BO attribute

2015-05-26 Thread Martin Peres
On 26/05/2015 16:23, Alexandre Courbot wrote:
> On Sun, May 24, 2015 at 3:26 PM, Maarten Lankhorst
>  wrote:
>> Op 23-05-15 om 08:45 schreef Alexandre Courbot:
>>> On Fri, May 22, 2015 at 3:23 AM, Martin Peres  
>>> wrote:
>>>> On 21/05/2015 11:47, Ben Skeggs wrote:
>>>>> On 21 May 2015 at 16:08, Alexandre Courbot  wrote:
>>>>>> Add a flag allowing Nouveau to specify that an object should be coherent
>>>>>> at allocation time. This is required for some class of objects like
>>>>>> fences which are randomly-accessed by both the CPU and GPU. This flag
>>>>>> instructs the kernel driver to make sure the object remains coherent
>>>>>> even on architectures for which coherency is not guaranteed by the bus.
>>>>>>
>>>>>> Signed-off-by: Alexandre Courbot 
>>>>> Reviewed-by: Ben Skeggs 
>>>> Pushed!
>>> Thanks! Should we also bump the ABI version or something?
>> If you mean changing libdrm_nouveau.so.2 to .so.3 no! If you mean having 
>> something that pkg-config can pick up that's done automatically on every drm 
>> release.
> Sorry for not having been clear. I was talking about checking against
> the driver version to ensure it supports all the features that libdrm
> can throw at it. We increased the modules's version to 1.2.2 when
> adding the coherent flag:
>
> drm/nouveau/nouveau_drm.h:
>   * 1.2.2:
>   *  - add NOUVEAU_GEM_DOMAIN_COHERENT flag

That will be the job of mesa to check the DRM version, I would guess.
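(For what it's worth, such a userspace check against the bumped driver
version is straightforward; a hypothetical sketch, not actual mesa code:)

```python
def drm_version_at_least(actual: str, required: str) -> bool:
    """Compare dotted driver versions, e.g. against the 1.2.2 bump that
    added NOUVEAU_GEM_DOMAIN_COHERENT (hypothetical helper)."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(actual) >= to_tuple(required)

print(drm_version_at_least("1.2.2", "1.2.2"))   # True
print(drm_version_at_least("1.2.1", "1.2.2"))   # False
```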


[Nouveau] [PATCH v2] nouveau: add coherent BO attribute

2015-05-21 Thread Martin Peres
On 21/05/2015 11:47, Ben Skeggs wrote:
> On 21 May 2015 at 16:08, Alexandre Courbot  wrote:
>> Add a flag allowing Nouveau to specify that an object should be coherent
>> at allocation time. This is required for some class of objects like
>> fences which are randomly-accessed by both the CPU and GPU. This flag
>> instructs the kernel driver to make sure the object remains coherent
>> even on architectures for which coherency is not guaranteed by the bus.
>>
>> Signed-off-by: Alexandre Courbot 
> Reviewed-by: Ben Skeggs 

Pushed!


[PATCH] nouveau: add coherent BO attribute

2015-05-20 Thread Martin Peres
On 20/05/15 08:11, Alexandre Courbot wrote:
> On Fri, May 15, 2015 at 8:39 PM, Maarten Lankhorst
>  wrote:
>> Op 15-05-15 om 09:11 schreef Alexandre Courbot:
>>> Re-pinging Marteen on an email address that still exists :P
>>>
>>> On Wed, Apr 22, 2015 at 6:08 PM, Alexandre Courbot  
>>> wrote:
 On Sun, Mar 15, 2015 at 5:41 PM, Alexandre Courbot >>> nvidia.com> wrote:
> On 03/14/2015 04:33 AM, Maarten Lankhorst wrote:
>> Hey,
>>
>> Op 13-03-15 om 07:27 schreef Alexandre Courbot:
>>> Add a flag allowing Nouveau to specify that an object should be coherent
>>> at allocation time. This is required for some class of objects like
>>> fences which are randomly-accessed by both the CPU and GPU. This flag
>>> instructs the kernel driver to make sure the object remains coherent
>>> even on architectures for which coherency is not guaranteed by the bus.
>>>
>>> Signed-off-by: Alexandre Courbot 
>> I don't see a problem with this patch, but similar patches to intel to
>> libdrm have been shot down when the changes weren't in an official kernel
>> yet, so I think this should wait until the change is at least in 
>> drm-next.
>> ;-)
> Sounds good. I will ping you again once the kernel change reaches -next.
 Hi Marteen,

 The kernel change required for this patch is now in -next. Do you
 think we can merge it now?
>> I think it would be ok to merge now.
> Great - who could do this? :P

I could do it. Please provide me with the patch with the necessary R-b 
and I can push it to our libdrm (and/or mesa).


[PATCH] drm/radeon: use 0-255 rather than 0-100 for pwm fan range

2015-04-15 Thread Martin Peres
On 13/04/2015 02:42, Alex Deucher wrote:
> On Sat, Apr 11, 2015 at 5:52 AM, Martin Peres  wrote:
>> On 05/02/2015 11:49, Christian König wrote:
>>> Am 04.02.2015 um 23:27 schrieb Alex Deucher:
>>>> 0-255 seems to be the preferred range for the pwm interface.
>>>>
>>>> Signed-off-by: Alex Deucher 
>>> Yeah, using 100 on a 8bit pwm timer sounds rather obviously wrong.
>>>
>>> Patch is Reviewed-by: Christian König 
>>>
>>>> ---
>>>> drivers/gpu/drm/radeon/radeon_pm.c | 8 ++--
>>>> 1 file changed, 6 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/radeon/radeon_pm.c
>>>> b/drivers/gpu/drm/radeon/radeon_pm.c
>>>> index 91e1bd2..9f758d3 100644
>>>> --- a/drivers/gpu/drm/radeon/radeon_pm.c
>>>> +++ b/drivers/gpu/drm/radeon/radeon_pm.c
>>>> @@ -585,7 +585,7 @@ static ssize_t radeon_hwmon_set_pwm1_enable(struct
>>>> device *dev,
>>>>  if (err)
>>>>  return err;
>>>> -switch(value) {
>>>> +   switch (value) {
>>>>  case 1: /* manual, percent-based */
>>>>  rdev->asic->dpm.fan_ctrl_set_mode(rdev,
>>>> FDO_PWM_MODE_STATIC);
>>>>  break;
>>>> @@ -608,7 +608,7 @@ static ssize_t radeon_hwmon_get_pwm1_max(struct
>>>> device *dev,
>>>>   struct device_attribute *attr,
>>>>   char *buf)
>>>> {
>>>> -   return sprintf(buf, "%i\n", 100); /* pwm uses percent-based
>>>> fan-control */
>>>> +   return sprintf(buf, "%i\n", 255);
>>>> }
>>
>> This is not a standard name as it is not documented in
>> https://www.kernel.org/doc/Documentation/hwmon/sysfs-interface
>>
>> Apparently, the pwm value should always have been exposed as 0-255 and I
>> screwed it up in nouveau by having it be 0-100...
>>
>> Maybe we should ask pwm*_max to be defined so as new applications could do
>> the right thing while not having nouveau change its ABI. I guess it is ok to
>> change it for radeon as there are so few users currently but the interface
>> has been in nouveau for multiple years already!
>>
> IIRC, I changed it in the same kernel as the original patch went in.
>
> Alex
Wonderful!


[PATCH] drm/radeon: use 0-255 rather than 0-100 for pwm fan range

2015-04-11 Thread Martin Peres
On 05/02/2015 11:49, Christian König wrote:
> Am 04.02.2015 um 23:27 schrieb Alex Deucher:
>> 0-255 seems to be the preferred range for the pwm interface.
>>
>> Signed-off-by: Alex Deucher 
> Yeah, using 100 on a 8bit pwm timer sounds rather obviously wrong.
>
> Patch is Reviewed-by: Christian König 
>
>> ---
>>drivers/gpu/drm/radeon/radeon_pm.c | 8 ++--
>>1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/radeon/radeon_pm.c 
>> b/drivers/gpu/drm/radeon/radeon_pm.c
>> index 91e1bd2..9f758d3 100644
>> --- a/drivers/gpu/drm/radeon/radeon_pm.c
>> +++ b/drivers/gpu/drm/radeon/radeon_pm.c
>> @@ -585,7 +585,7 @@ static ssize_t radeon_hwmon_set_pwm1_enable(struct 
>> device *dev,
>>  if (err)
>>  return err;
>>
>> -switch(value) {
>> +switch (value) {
>>  case 1: /* manual, percent-based */
>>  rdev->asic->dpm.fan_ctrl_set_mode(rdev, FDO_PWM_MODE_STATIC);
>>  break;
>> @@ -608,7 +608,7 @@ static ssize_t radeon_hwmon_get_pwm1_max(struct device 
>> *dev,
>>   struct device_attribute *attr,
>>   char *buf)
>>{
>> -return sprintf(buf, "%i\n", 100); /* pwm uses percent-based fan-control 
>> */
>> +return sprintf(buf, "%i\n", 255);
>>}

This is not a standard name as it is not documented in 
https://www.kernel.org/doc/Documentation/hwmon/sysfs-interface

Apparently, the pwm value should always have been exposed as 0-255 and I 
screwed it up in nouveau by having it be 0-100...

Maybe we should ask for pwm*_max to be defined so that new applications 
could do the right thing while not having nouveau change its ABI. I guess 
it is OK to change it for radeon, as there are so few users currently, but 
the interface has been in nouveau for multiple years already!
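(For what it's worth, the mapping between the two ranges is trivial;
hypothetical helpers, just to illustrate the ABI difference discussed above:)

```python
def percent_to_pwm(percent: int) -> int:
    """Map nouveau's historical 0-100 percent range onto the hwmon 0-255 one."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return (percent * 255 + 50) // 100    # integer rounding to nearest

def pwm_to_percent(pwm: int) -> int:
    """Inverse mapping, 0-255 duty cycle back to 0-100 percent."""
    if not 0 <= pwm <= 255:
        raise ValueError("pwm out of range")
    return (pwm * 100 + 127) // 255

print(percent_to_pwm(100), pwm_to_percent(255))   # 255 100
```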



GSOC 2015 project mentor

2015-03-09 Thread Martin Peres
Hey everyone,

On 09/03/15 02:30, John Hunter wrote:
>
> On Sun, Mar 8, 2015 at 4:46 AM, Stefan Lance <slance at uchicago.edu> wrote:
>
> Hello,
>
> Who should I contact about / who is the mentor for the project
> titled "GL/GLSL tests for GL 4.0 and newer"?
>

I guess this would depend on what part you want to work on. Anything in 
mind already?

As a rule of thumb, look at what is missing in GL3.txt [0] and see if 
there is something you like there. Maybe tessellation shaders could 
require a substantial amount of work.

Anyway, come back to us when you have selected some areas and then we 
can see if someone is interested to mentor those parts.

>
> Thanks,
> Stefan Lance
>
>
> Hi Stefan,
>
> I managed to contact Martin Peres over the last two days; I asked 
> him about the project "GL/GLSL tests for GL 4.0 and newer", and he told 
> me that this was copy-pasted from last year's idea.

Right, thanks!

>
> Hope this can help you.
>
> John

Cheers,
Martin

[0] http://cgit.freedesktop.org/mesa/mesa/tree/docs/GL3.txt


GSoC 2015: The X.Org Foundation has been accepted as a mentoring organisation

2015-03-02 Thread Martin Peres
Hello,

I have the pleasure to announce you that the X.Org Foundation has been 
accepted as a mentoring organisation for the Google Summer of Code 2015. 
This means that your project can receive students again this year!

Here are the important dates coming[0]:
- 16 March: Student application period opens.
- 27 March: Student application deadline.
- 24 April: All mentors must be signed up and all student proposals 
matched with a mentor
- 27 April: Accepted student proposals announced on the Google Summer of 
Code 2015 site

Potential mentors:
Please put up your project ideas on our ideas page[1] by the end of the 
week to get the maximum chance of finding the best student possible. In 
any case, you are required to put your project up before March 16th.

Students:
Consult our ideas page[1] or come up with your own project idea. In any 
case, apply as soon as possible, get in contact with potential mentors and 
ask them for advice on how to improve your chances of being accepted. 
This year, we are going to be a bit stricter than usual when it comes to 
selecting students. You thus have until April 23rd to get a patch 
accepted upstream to show us you can work in the open source community 
and with the usual tools (git, gcc, autotools, cmake, ...). One idea 
would be for you to pick a simple bug from the project's bugzilla and 
fix it. If you have already submitted patches to your project of choice 
or a closely related project, you do not have to do anything. We would 
however still advise you to show your interest by providing patches or 
participating in our discussions.

Finally, I would like to thank Google again this year for giving us the 
opportunity to get new blood to the projects under the X.Org 
Foundation's umbrella!

Martin Peres, on behalf of the board of directors

[0] https://www.google-melange.com/gsoc/events/google/gsoc2015
[1] http://www.x.org/wiki/SummerOfCodeIdeas/


[PATCH] drm: fix a typo in a comment

2014-12-09 Thread Martin Peres
Spotted while reviewing the DRM changes in Linux 3.18 for LinuxFR.

Cc: Ville Syrjälä 
Signed-off-by: Martin Peres 
---
 drivers/gpu/drm/drm_irq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
index 0e47df4..f5a5f18 100644
--- a/drivers/gpu/drm/drm_irq.c
+++ b/drivers/gpu/drm/drm_irq.c
@@ -166,7 +166,7 @@ static void vblank_disable_and_save(struct drm_device *dev, int crtc)
	spin_lock_irqsave(&dev->vblank_time_lock, irqflags);

/*
-* If the vblank interrupt was already disbled update the count
+* If the vblank interrupt was already disabled update the count
 * and timestamp to maintain the appearance that the counter
 * has been ticking all along until this time. This makes the
 * count account for the entire time between drm_vblank_on() and
-- 
2.1.3



[PATCH] drm: fix a word repetition in a comment

2014-12-09 Thread Martin Peres
Spotted while reviewing the DRM changes in Linux 3.18 for LinuxFR.

CC: Daniel Vetter 
Signed-off-by: Martin Peres 
---
 drivers/gpu/drm/drm_crtc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index 4a44f89..90ad762 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -3454,7 +3454,7 @@ void drm_fb_release(struct drm_file *priv)

/*
 * When the file gets released that means no one else can access the fb
-* list any more, so no need to grab fpriv->fbs_lock. And we need to to
+* list any more, so no need to grab fpriv->fbs_lock. And we need to
 * avoid upsetting lockdep since the universal cursor code adds a
 * framebuffer while holding mutex locks.
 *
-- 
2.1.3



XDC2014: Call for paper

2014-09-08 Thread Martin Peres
On 02/05/2014 00:52, Martin Peres wrote:
> Hello,
>
> I have the pleasure to announce that the X.org Developer Conference 2014
> will
> be held in Bordeaux, France from October 8th to October 10th. The venue is
> located in the campus of the University of Bordeaux 1, in the computer
> science
> research lab called LaBRI.
>
> The official page for the event is http://www.x.org/wiki/Events/XDC2014
> while the call for paper is at http://www.x.org/wiki/Other/Press/CFP2014/
>
> As usual, we are open to talks across the layers of the graphics stack,
> from
> the kernel to desktop environments / graphical applications and about how
> to make things better for the developers who build them. If you're not sure
> if something might fit, mail me or add it to the ideas list found in the
> program page.
>
> The conference is free of charge and opened to the general public. If
> you plan
> on coming, please add yourself to the attendees list. We'll use this
> list to
> make badges and plan for the catering.
>
> I am looking forward to seeing you there, if you have any
> inquiries/questions,
> please send them to me (please also CC: board at foundation.x.org).
>
> Martin Peres

Two days left for submitting your talk proposal for the XDC2014.

After this date, the remaining slots will be allocated on a first-come, 
first-served basis.


[XDC2014] Travel sponsorship

2014-08-22 Thread Martin Peres
Hi everyone,

If you are willing to give a talk at XDC2014 and require travel 
sponsorship, please send an email to board at foundation.x.org (CC: 
martin.peres at free.fr) with an estimate of the travel + accommodation 
cost and the abstract of your talk.

Please send us your applications as soon as possible and do so no later 
than September 5th!

Thanks,
Martin Peres - On behalf of the board of directors


[PATCH 0/3] drm/gk20a: support for reclocking

2014-07-11 Thread Martin Peres
On 11/07/2014 03:42, Alexandre Courbot wrote:
> On 07/10/2014 06:50 PM, Mikko Perttunen wrote:
>> Does GK20A itself have any kind of thermal protection capabilities?
>> Upstream SOCTHERM support is not yet available (though I have a driver
>> in my tree), so we are thinking of disabling CPU DVFS on boards that
>> don't have always-on active cooling for now. Same might be necessary for
>> GPU as well.
>
> There is a small thermal driver ( 
> https://android.googlesource.com/kernel/tegra/+/b445e5296764d18861a6450f6851f25b9ca59dee/drivers/video/tegra/host/gk20a/therm_gk20a.c
>  
> ) but it doesn't seem to do much. I believe that for Tegra we rely in 
> SOCTHERM instead, but maybe Ken could confirm?

Unless it changed a great deal, I reverse engineered most of the 
registers in this area (for the nv50), but some stuff didn't change that 
much and could be used straight away (temperature reading, sending IRQs 
on thresholds, auto downclocking when overheating). We are not really 
using those features in nouveau right now; we just poll the 
temperature every second.

If we are moving to using the PMU for thermal monitoring, we can do the 
polling there and warn userspace when the temperature is too high or 
when performance is insufficient or excessive. I have part of the code for 
performance counters in the PMU; it's dead simple.
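The poll-and-warn scheme just described is simple enough to sketch generically (plain Python, not the nouveau/PMU code; the millidegree units, threshold value, and callback shape are illustrative assumptions):

```python
import time

def poll_temperature(read_temp_mc, threshold_mc, on_alert,
                     interval_s=1.0, iterations=None):
    """Poll a temperature source and fire a callback on threshold crossings.

    read_temp_mc: callable returning the temperature in millidegrees C
    threshold_mc: alert threshold in millidegrees C (hypothetical value)
    on_alert:     called with the temperature when it exceeds the threshold
    iterations:   stop after this many polls (None = poll forever)
    """
    polled = 0
    while iterations is None or polled < iterations:
        temp = read_temp_mc()
        if temp > threshold_mc:
            on_alert(temp)
        polled += 1
        # Sleep between polls, but not after the final one.
        if iterations is None or polled < iterations:
            time.sleep(interval_s)
    return polled
```

In the scheme described above, `read_temp_mc` would be the PMU-side sensor read and `on_alert` the notification to userspace.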

Martin


unparseable, undocumented /sys/class/drm/.../pstate

2014-06-23 Thread Martin Peres
On 23/06/2014 19:56, Ilia Mirkin wrote:
> On Mon, Jun 23, 2014 at 1:46 PM, Martin Peres  wrote:
>> On 23/06/2014 18:40, Ilia Mirkin wrote:
>>>
>>> On Mon, Jun 23, 2014 at 12:36 PM, Greg KH  wrote:
>>>>
>>>> On Mon, Jun 23, 2014 at 12:18:51PM -0400, Ilia Mirkin wrote:
>>>> A list of valid "values" that a file can be in is fine if you just then
>>>> write one value back to that file.  That's the one exception, but a
>>>> minor one given the huge number of sysfs files.  Other than that, if you
>>>
>>>
>>> Which is pretty much what the pstate file is. Would it make things
>>> better if we removed the descriptive info while leaving the pstate
>>> file in place?
>>
>>
>> This means we should also create a new sysfs file per performance level too,
>> right? Is there another way for a driver to expose a list in sysfs?
>>
>> Since NVIDIA gives different names to performance levels depending on the
>> card family, we may need to abstract the name away in order to provide some
>> consistency and make listing performance levels easier from a program (may
>> it use readdir() or stat()).
>>
>> Moving the file to debugfs would "fix" the one-value-per-file rule but it
>> would also require users to mount debugfs at boot time in order to write the
>> default configuration they want for PM instead of just changing
>> /etc/sysctl.d/nouveau.conf... On the other hand, I'm not sure we can commit
>> on having a stable ABI on the way we display clocks (unless people take them
>> as a single value and do not try to parse them) as new hardware will alter
>> the semantics of each clock domain, if not drop/split some of them!
>>
>> Whatever we do, it doesn't look like we can find a nice solution that fits
>> every use cases unless we write a userspace program to access this data, but
>> this seems highly overkill...
>
> I was thinking just having the list of level ids in the pstate file,
> and then stick the current file into debugfs. That way people retain
> the ability to see things, as well as use pstate directly for a
> configured system.

In this case, would we still use the * to indicate the current perflvl?

Also, are we supposed to output the current perflvl or the current 
configuration in use? Right now, we configure it to either auto (WIP), 
perflvl X at all times, or perflvl X when on battery and Y when on AC.
However, when we read pstate, we only get the current perflvl, if my 
memory serves me right. Maybe we should create a read-only file that 
outputs the current perflvl and keep pstate for storing the configuration.



unparseable, undocumented /sys/class/drm/.../pstate

2014-06-23 Thread Martin Peres
On 23/06/2014 18:40, Ilia Mirkin wrote:
> On Mon, Jun 23, 2014 at 12:36 PM, Greg KH  wrote:
>> On Mon, Jun 23, 2014 at 12:18:51PM -0400, Ilia Mirkin wrote:
>> A list of valid "values" that a file can be in is fine if you just then
>> write one value back to that file.  That's the one exception, but a
>> minor one given the huge number of sysfs files.  Other than that, if you
>
> Which is pretty much what the pstate file is. Would it make things
> better if we removed the descriptive info while leaving the pstate
> file in place?

This means we should also create a new sysfs file per performance level 
too, right? Is there another way for a driver to expose a list in sysfs?

Since NVIDIA gives different names to performance levels depending on 
the card family, we may need to abstract the name away in order to 
provide some consistency and make listing performance levels easier from 
a program (whether it uses readdir() or stat()).

Moving the file to debugfs would "fix" the one-value-per-file rule but 
it would also require users to mount debugfs at boot time in order to 
write the default configuration they want for PM instead of just 
changing /etc/sysctl.d/nouveau.conf... On the other hand, I'm not sure 
we can commit on having a stable ABI on the way we display clocks 
(unless people take them as a single value and do not try to parse them) 
as new hardware will alter the semantics of each clock domain, if not 
drop/split some of them!

Whatever we do, it doesn't look like we can find a nice solution that 
fits every use case unless we write a userspace program to access this 
data, but this seems highly overkill...
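To make the parsing concern concrete, here is roughly what consuming a multi-value pstate-style file looks like from a program (plain Python; the exact line format is an assumption modelled on the discussion — precisely the kind of thing that was never a stable ABI):

```python
def parse_pstate(text):
    """Parse a pstate-style listing into (levels, current).

    Assumed line format (illustrative, not the real ABI):
        "<id>: <clock description>"
    with a trailing "*" marking the currently active performance level.
    """
    levels = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        ident, _, desc = line.partition(":")
        ident, desc = ident.strip(), desc.strip()
        if desc.endswith("*"):
            desc = desc[:-1].strip()
            current = ident  # the starred level is the active one
        levels[ident] = desc
    return levels, current
```

Every consumer ends up writing something like this, which is exactly the one-value-per-file objection: the format must be parsed, and any change to the clock-domain semantics breaks it.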


XDC2014: Call for paper

2014-05-02 Thread Martin Peres
Hello,

I have the pleasure to announce that the X.org Developer Conference 2014 
will
be held in Bordeaux, France from October 8th to October 10th. The venue is
located in the campus of the University of Bordeaux 1, in the computer 
science
research lab called LaBRI.

The official page for the event is http://www.x.org/wiki/Events/XDC2014
while the call for paper is at http://www.x.org/wiki/Other/Press/CFP2014/

As usual, we are open to talks across the layers of the graphics stack, from
the kernel to desktop environments / graphical applications and about how
to make things better for the developers who build them. If you're not sure
if something might fit, mail me or add it to the ideas list found in the
program page.

The conference is free of charge and opened to the general public. If 
you plan
on coming, please add yourself to the attendees list. We'll use this list to
make badges and plan for the catering.

I am looking forward to seeing you there, if you have any 
inquiries/questions,
please send them to me (please also CC: board at foundation.x.org).

Martin Peres


DRM security flaws and security levels.

2014-04-14 Thread Martin Peres
On 14/04/2014 15:09, Rob Clark wrote:
> On Mon, Apr 14, 2014 at 8:56 AM, Thomas Hellstrom  
> wrote:
>> On 04/14/2014 02:41 PM, One Thousand Gnomes wrote:
 throw out all GPU memory on master drop and block ioctls requiring
 authentication until master becomes active again.
>>> If you have a per driver method then the driver can implement whatever is
>>> optimal (possibly including throwing it all out).
>>>
 -1: The driver allows an authenticated client to craft command streams
 that could access any part of system memory. These drivers should be
 kept in staging until they are fixed.
>>> I am not sure they belong in staging even.
>>>
 0: Drivers that are vulnerable to any of the above scenarios.
 1: Drivers that are immune against all above scenarios but allows any
 authenticated client with *active* master to access all GPU memory. Any
 enabled render nodes will be insecure, while primary nodes are secure.
 2: Drivers that are immune against all above scenarios and can protect
 clients from accessing eachother's gpu memory:
 Render nodes will be secure.

 Thoughts?
>>> Another magic number to read, another case to get wrong where the OS
>>> isn't providing security by default.
>>>
>>> If the driver can be fixed to handle it by flushing out all GPU memory
>>> then the driver should be fixed to do so. Adding magic udev nodes is just
>>> adding complexity that ought to be made to go away before it even becomes
>>> an API.
>>>
>>> So I think there are three cases
>>>
>>> - insecure junk driver. Shouldn't even be in staging
>>> - hardware isn't as smart enough, or perhaps has a performance problem so
>>>sometimes flushes all buffers away on a switch
>>> - drivers that behave well
>>>
>>> Do you then even need a sysfs node and udev hacks (remembering not
>>> everyone even deploys udev on their Linux based products)
>>>
>>> For the other cases
>>>
>>> - how prevalent are the problem older user space drivers nowadays?
>>>
>>> - the fix for "won't fix" drivers is to move them to staging, and then
>>>if they are not fixed or do not acquire a new maintainer who will,
>>>delete them.
>>>
>>> - if we have 'can't fix drivers' then its a bit different and we need to
>>>understand better *why*.
>>>
>>> Don't screw the kernel up because there are people who can't be bothered
>>> to fix bugs. Moving them out of the tree is a great incentive to find
>>> someone to fix it.
>>>
>> On second thought I'm dropping this whole issue.
>> I've brought this and other security issues up before but nobody really
>> seems to care.
> I wouldn't say that.. render-nodes, dri3/prime/dmabuf, etc, wouldn't
> exist if we weren't trying to solve these issues.
>
> Like I said earlier, I think we do want some way to expose range of
> supported security levels, and in case multiple levels are supported
> by driver some way to configure desired level.
>
> Well, "range" may be overkill, I only see two sensible values, either
> "gpu can access anyone's gpu memory (but not arbitrary system
> memory)", or "we can also do per-process isolation of gpu buffers".
> Of course the "I am a root hole" security level has no place in the
> kernel.
>
> BR,
> -R
I indeed think having a standard way of exposing how much security
can be expected from the hw/driver is a good thing!

I have never worked on trying to identify the performance hit of using
per-process virtual address spaces, but I would expect it to be low enough
since the MMU cannot pagefault (at least, doing so on NVIDIA hw means
killing the context). Maybe the performance hit would be at context 
switching
(because the TLB would be reset).

I am interested in knowing the performance impact of PPGTT on Intel IGPs 
and whether it could be activated on a per-process basis. Of course, 
applications run without PPGTT would have to be trusted by the user, as 
they would be able to access other processes' BOs.

If the performance impact is high AND it can be deactivated per process, 
then it may make sense to allow libdrm to allocate this privileged access 
to the GPU.

However, how will we prevent applications from requesting this privileged 
mode all the time? Bypassing the PPGTT security should require proof of 
the user's intent, and right now we do not have this capability in current 
desktop environments (although it is being worked on, [1] and [2]). Do you 
have any idea how to expose this knob securely? Root could disable 
PPGTT for all processes, but I don't see how we could securely handle 
the authorisation for an application to disable the PPGTT without some 
serious work...

As you can see, there is some work to be done before exposing the 
security/performance trade-off knob. I am not convinced it is necessary, 
but I would definitely reconsider my position when data shows up giving 
us the performance impact of the graphics MMU.

However, I would really appreciate it if drivers could expose the graphics 
process isolation level. I do not think we should go for a 

nouveau_fan_update: possible circular locking dependency detected

2014-03-13 Thread Martin Peres
On 13/03/2014 14:38, Ilia Mirkin wrote:
> On Sun, Mar 9, 2014 at 10:51 AM, Marcin Slusarz
>  wrote:
>> [  326.168487] ==
>> [  326.168491] [ INFO: possible circular locking dependency detected ]
>> [  326.168496] 3.13.6 #1270 Not tainted
>> [  326.168500] ---
>> [  326.168504] ldconfig/22297 is trying to acquire lock:
>> [  326.168507]  (&(>fan->lock)->rlock){-.-...}, at: 
>> [] nouveau_fan_update+0xeb/0x252 [nouveau]
>> [  326.168551]
>> but task is already holding lock:
>> [  326.168555]  (&(>sensor.alarm_program_lock)->rlock){-.-...}, at: 
>> [] alarm_timer_callback+0xf1/0x179 [nouveau]
>> [  326.168587]
>> which lock already depends on the new lock.
>>
>> [  326.168592]
>> the existing dependency chain (in reverse order) is:
>> [  326.168596]
>> -> #1 (&(>sensor.alarm_program_lock)->rlock){-.-...}:
>> [  326.168606][] lock_acquire+0xce/0x117
>> [  326.168615][] _raw_spin_lock_irqsave+0x3f/0x51
>> [  326.168623][] alarm_timer_callback+0xf1/0x179 
>> [nouveau]
>> [  326.168651][] 
>> nv04_timer_alarm_trigger+0x1b1/0x1cb [nouveau]
>> [  326.168679][] nv04_timer_alarm+0xb5/0xbe 
>> [nouveau]
>> [  326.168708][] nouveau_fan_update+0x234/0x252 
>> [nouveau]
>> [  326.168735][] nouveau_fan_alarm+0x15/0x17 
>> [nouveau]
>> [  326.168763][] 
>> nv04_timer_alarm_trigger+0x1b1/0x1cb [nouveau]
>> [  326.168790][] nv04_timer_intr+0x5b/0x13c 
>> [nouveau]
>> [  326.168817][] nouveau_mc_intr+0x2e2/0x3b1 
>> [nouveau]
>> [  326.168838][] handle_irq_event_percpu+0x5c/0x1dc
>> [  326.168846][] handle_irq_event+0x3c/0x5c
>> [  326.168852][] handle_edge_irq+0xc4/0xeb
>> [  326.168860][] handle_irq+0x120/0x12d
>> [  326.168868][] do_IRQ+0x48/0xaf
>> [  326.168873][] ret_from_intr+0x0/0x13
>> [  326.168881][] arch_cpu_idle+0x13/0x1d
>> [  326.168887][] cpu_startup_entry+0x140/0x218
>> [  326.168895][] start_secondary+0x1bf/0x1c4
>> [  326.168902]
>> -> #0 (&(>fan->lock)->rlock){-.-...}:
>> [  326.168913][] __lock_acquire+0x10be/0x182b
>> [  326.168920][] lock_acquire+0xce/0x117
>> [  326.168924][] _raw_spin_lock_irqsave+0x3f/0x51
>> [  326.168931][] nouveau_fan_update+0xeb/0x252 
>> [nouveau]
>> [  326.168958][] nouveau_therm_fan_set+0x14/0x16 
>> [nouveau]
>> [  326.168984][] nouveau_therm_update+0x303/0x312 
>> [nouveau]
>> [  326.169011][] nouveau_therm_alarm+0x13/0x15 
>> [nouveau]
>> [  326.169038][] 
>> nv04_timer_alarm_trigger+0x1b1/0x1cb [nouveau]
>> [  326.169059][] nv04_timer_alarm+0xb5/0xbe 
>> [nouveau]
>> [  326.169079][] alarm_timer_callback+0x15e/0x179 
>> [nouveau]
>> [  326.169101][] 
>> nv04_timer_alarm_trigger+0x1b1/0x1cb [nouveau]
>> [  326.169121][] nv04_timer_intr+0x5b/0x13c 
>> [nouveau]
>> [  326.169142][] nouveau_mc_intr+0x2e2/0x3b1 
>> [nouveau]
>> [  326.169160][] handle_irq_event_percpu+0x5c/0x1dc
>> [  326.169165][] handle_irq_event+0x3c/0x5c
>> [  326.169170][] handle_edge_irq+0xc4/0xeb
>> [  326.169175][] handle_irq+0x120/0x12d
>> [  326.169179][] do_IRQ+0x48/0xaf
>> [  326.169183][] ret_from_intr+0x0/0x13
>> [  326.169189]
>> other info that might help us debug this:
>>
>> [  326.169193]  Possible unsafe locking scenario:
>>
>> [  326.169195]CPU0CPU1
>> [  326.169197]
>> [  326.169199]   lock(&(>sensor.alarm_program_lock)->rlock);
>> [  326.169205]
>> lock(&(>fan->lock)->rlock);
>> [  326.169211]
>> lock(&(>sensor.alarm_program_lock)->rlock);
>> [  326.169216]   lock(&(>fan->lock)->rlock);
>> [  326.169221]
>>   *** DEADLOCK ***
>>
>>   [  326.169225] 1 lock held by ldconfig/22297:
>>   [  326.169229]  #0:  (&(>sensor.alarm_program_lock)->rlock){-.-...}, 
>> at: [] alarm_timer_callback+0xf1/0x179 [nouveau]
>>   [  326.169253]
>>   stack backtrace:
>>   [  326.169258] CPU: 7 PID: 22297 Comm: ldconfig Not tainted 3.13.6 #1270
>>   [  326.169260] Hardware name: System manufacturer System Product Name/P6T 
>> SE, BIOS 060309/02/2009
>>   [  326.169264]  90fb6360 8801bfdc3a38 9059e369 
>> 0006
>>   [  326.169273]  90fb61b0 8801bfdc3a88 905998cf 
>> 0002
>>   [  326.169282]  8800b148dbe0 0001 8800b148e1e0 
>> 0001
>>   [  326.169342] Call Trace:
>>   [  326.169344][] dump_stack+0x4e/0x71
>>   [  326.169352]  [] print_circular_bug+0x2ad/0x2be
>>   [  326.169356]  [] __lock_acquire+0x10be/0x182b
>>   [  326.169360]  [] ? check_irq_usage+0x99/0xab
>>   [  326.169365]  [] lock_acquire+0xce/0x117
>>   [  326.169384]  [] ? nouveau_fan_update+0xeb/0x252 
>> [nouveau]
>>   [ 
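The report above is a classic AB-BA inversion: one path takes alarm_program_lock then fan->lock while another takes them in the reverse order. The usual cure is a single global acquisition order, sketched here with plain Python threading locks standing in for the kernel spinlocks (names and the workload are illustrative, not the nouveau code):

```python
import threading

# Stand-ins for sensor.alarm_program_lock and fan->lock.
lock_alarm = threading.Lock()
lock_fan = threading.Lock()
counter = 0

def worker(n):
    """Every path takes the locks in the SAME global order: alarm, then fan.

    The reported deadlock requires one path doing alarm->fan while another
    does fan->alarm; imposing one ordering removes the cycle entirely.
    """
    global counter
    for _ in range(n):
        with lock_alarm:
            with lock_fan:
                counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In the kernel the equivalent fix is to document and enforce the ordering (or drop one lock before taking the other), which is what lockdep is nudging the driver towards.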

The X.Org Foundation is an accepted Org for the GSoC 2014!

2014-02-25 Thread Martin Peres
Hey, open source graphics enthusiasts,

Yesterday, Google announced that the X.Org foundation is an accepted 
organization for the GSoC 2014!

If you are a potential mentor and have a project you would like a 
student to work on, you can propose it on the summer of code 
ideas wiki page[1]. Right now, we have 16 project ideas, half of them 
related to Xpra.

If you are a student and are interested in a project covered by the 
X.Org Foundation (DRI, mesa, X-Server, Wayland and 2D/3D/video accel), 
you can send your project proposal to Google[2]. Bonus points if you 
find a mentor yourself. You can also have a look at the summer of code 
ideas wiki page[1] for interesting projects.

Looking forward to seeing which projects will happen for this 2014 edition!

Martin Peres

[1] http://www.x.org/wiki/SummerOfCodeIdeas/
[2] https://www.google-melange.com/gsoc/homepage/google/gsoc2014




[GSoC2014] Call for projects ideas and mentors

2014-02-06 Thread Martin Peres
Hi, fellow graphics stack developers,

Now that FOSDEM is over, it is time to think about the
Google Summer of Code 2014!

If you would like to propose a project for the GSoC 2014, please
write your proposals at [1], before the 14th of February, in order
to increase our chances of being an accepted organization.

If you would potentially be interested in being a mentor,
please also contact me, as Google wants to have an estimate of
the number of potential mentors we would have.

If you have any questions regarding the GSoC or need any additional
information, please reply in this thread.

Cheers,
Martin

[1] http://xorg.freedesktop.org/wiki/SummerOfCodeIdeas/


Nominations for X.Org Foundation Board of Directors are OPEN

2014-01-13 Thread Martin Peres
We are seeking nominations for candidates for election to the X.Org
Foundation Board of Directors.  All X.Org Foundation members are
eligible for election to the board.

Nominations for the 2014 election are now open and will remain open
until 23.59 GMT on 12 February 2014.

The Board consists of directors elected from the membership.  Each
year, an election is held to bring the total number of directors to
eight.  The four members receiving the highest vote totals will serve
as directors for two year terms.

The directors who received two year terms starting in 2013 were
Alan Coopersmith, Martin Peres, Peter Hutterer and Stuart Kreitman. 
They will continue to serve until their term ends in 2015.  Current
directors whose term expires in 2014 are Matthias Hopf, Keith Packard, 
Matt Dew, and Alex Deucher.

A director is expected to participate in the bi-weekly IRC meeting to
discuss current business and to attend the annual meeting of the X.Org
Foundation, which will be held at a location determined in advance by
the Board of Directors.

A member may nominate themselves or any other member they feel is
qualified.  Nominations should be sent to the Election Committee at
elections at x.org.

Nominees shall be required to be current members of the X.Org
Foundation, and submit a  personal statement of up to 200 words that
will be provided to prospective voters.  The collected statements, along 
with the statement of contribution to the X.Org Foundation in
the members account page on http://members.x.org, will be made available 
to all voters to help them make their voting decisions.

Nominations, membership applications or renewals and completed personal 
statements must be received no later than 23.59 GMT on 12 February 2014.

The slate of candidates will be published 13 February 2014 and candidate 
Q&A will begin then. The deadline for X.Org membership applications and 
renewals is 18 February 2014.

The Election Committee
X.Org Foundation


Rendering when dropped master

2013-12-20 Thread Martin Peres
On 20/12/2013 10:55, Thomas Hellstrom wrote:
> On 12/20/2013 10:31 AM, Martin Peres wrote:
>> On 20/12/2013 07:57, Thomas Hellstrom wrote:
>>> So this is a potential issue that needs to be brought up sooner or
>>> later:
>>>
>>> Let's say a client is authenticated by the current master.
>>> Then the master drops, and we have a new master (fast user switching for
>>> example).
>>>
>>> What's the status of the clients authenticated by old masters?
>>> Should they be allowed to render and use memory resources or
>>> shouldn't they?
>>>
>>> A typical example where this could pose a problem is where user 1 opens
>>> a drm connection, authenticates itself and then drops master.
>>> Then user 2 starts an X server and exposes all DRI contents to user 1?
>>>
>>> /Thomas
>> I wouldn't worry about that since all clients should use render nodes
>> instead.
>> If you worry about this, help making the switch to them happen.
>>
> OK, so let's say user 1 opens a connection through a render node and
> starts rendering using shared buffers.
> Then we do a fast user switch, the render node ACL is updated and user 2
> logs in.
> What's stopping user 2 from accessing user 1's DRI content?
>
> I haven't looked closely at what's actually allowed through render
> nodes; perhaps buffer sharing using global names isn't?

That's right. GEM buffer sharing is disabled in render nodes;
only dma-buf is allowed.

GEM buffer sharing could hardly be less secure; that's why it was
decided to drop it entirely in favour of dma-buf.

Martin


Rendering when dropped master

2013-12-20 Thread Martin Peres
On 20/12/2013 07:57, Thomas Hellstrom wrote:
> So this is a potential issue that needs to be brought up sooner or later:
>
> Let's say a client is authenticated by the current master.
> Then the master drops, and we have a new master (fast user switching for
> example).
>
> What's the status of the clients authenticated by old masters?
> Should they be allowed to render and use memory resources or shouldn't they?
>
> A typical example where this could pose a problem is where user 1 opens
> a drm connection, authenticates itself and then drops master.
> Then user 2 starts an X server and exposes all DRI contents to user 1?
>
> /Thomas
I wouldn't worry about that, since all clients should use render nodes 
instead.
If you worry about this, help make the switch to them happen.

Martin/mupuf


[Mesa-dev] rules for merging patches to libdrm

2013-11-19 Thread Martin Peres
On 19/11/2013 16:04, Christian König wrote:
> So I think the very first step should be to publish everything on the 
> appropriate lists, and not try an approach like releasing the kernel 
> code first and waiting for it to show up upstream and then try to 
> release the userspace code build on top of it.

Sure, that's why we have private repos and mailing lists.

Steps would be:
- Make a mergeable working prototype you are satisfied with
- Publish all the parts as private repos
- Request comments and go back to the first step until everyone is satisfied
- Merge it in the kernel, then libdrm, then ddx and mesa

This is pretty much what everyone does already, so I agree with Jerome 
here: no need for a maintainer as long as people remember that. 
Thanks, Dave, for spotting this bad behaviour and reminding us of the rule.

Martin


Re: [PATCH] dma-buf: Expose buffer size to userspace (v2)

2013-09-04 Thread Martin Peres

Hi Christopher,
On 04/09/2013 05:15, Christopher James Halse Rogers wrote:

Each dma-buf has an associated size and it's reasonable for userspace
to want to know what it is.

Since userspace already has an fd, expose the size using the
size = lseek(fd, SEEK_END, 0); lseek(fd, SEEK_CUR, 0);
idiom.

v2: Added Daniel's sugeested documentation, with minor fixups

Signed-off-by: Christopher James Halse Rogers 
christopher.halse.rog...@canonical.com
Reviewed-by: Daniel Vetter daniel.vet...@ffwll.ch
Tested-by: Daniel Vetter daniel.vet...@ffwll.ch
---
  Documentation/dma-buf-sharing.txt | 12 
  drivers/base/dma-buf.c| 28 
  2 files changed, 40 insertions(+)

diff --git a/Documentation/dma-buf-sharing.txt 
b/Documentation/dma-buf-sharing.txt
index 0b23261..849e982 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -407,6 +407,18 @@ Being able to mmap an export dma-buf buffer object has 2 
main use-cases:
 interesting ways depending upong the exporter (if userspace starts 
depending
 upon this implicit synchronization).
  
+Other Interfaces Exposed to Userspace on the dma-buf FD

+--
+
+- Since kernel 3.12 the dma-buf FD supports the llseek system call, but only
+  with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow
+  the usual size discover pattern size = SEEK_END(0); SEEK_SET(0). Every other
+  llseek operation will report -EINVAL.
+
+  If llseek on dma-buf FDs isn't support the kernel will report -ESPIPE for all

Shouldn't it be supported instead of support?

Anyway, I'm just curious, in which case is it important to know the size?
Do we already have a way to get the dimensions (x, y and stride)?
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH] dma-buf: Expose buffer size to userspace (v2)

2013-09-04 Thread Martin Peres

On 04/09/2013 14:05, Daniel Vetter wrote:

On Wed, Sep 04, 2013 at 12:24:27PM +0200, Martin Peres wrote:

Hi Christopher,
On 04/09/2013 05:15, Christopher James Halse Rogers wrote:

Each dma-buf has an associated size and it's reasonable for userspace
to want to know what it is.

Since userspace already has an fd, expose the size using the
size = lseek(fd, SEEK_END, 0); lseek(fd, SEEK_CUR, 0);
idiom.

v2: Added Daniel's sugeested documentation, with minor fixups

Signed-off-by: Christopher James Halse Rogers 
christopher.halse.rog...@canonical.com
Reviewed-by: Daniel Vetter daniel.vet...@ffwll.ch
Tested-by: Daniel Vetter daniel.vet...@ffwll.ch
---
  Documentation/dma-buf-sharing.txt | 12 
  drivers/base/dma-buf.c| 28 
  2 files changed, 40 insertions(+)

diff --git a/Documentation/dma-buf-sharing.txt 
b/Documentation/dma-buf-sharing.txt
index 0b23261..849e982 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -407,6 +407,18 @@ Being able to mmap an export dma-buf buffer object has 2 
main use-cases:
 interesting ways depending upong the exporter (if userspace starts 
depending
 upon this implicit synchronization).
+Other Interfaces Exposed to Userspace on the dma-buf FD
+--
+
+- Since kernel 3.12 the dma-buf FD supports the llseek system call, but only
+  with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow
+  the usual size discover pattern size = SEEK_END(0); SEEK_SET(0). Every other
+  llseek operation will report -EINVAL.
+
+  If llseek on dma-buf FDs isn't support the kernel will report -ESPIPE for all

Shouldn't it be supported instead of support?

Anyway, I'm just curious, in which case is it important to know the size?
Do we already have a way to get the dimensions (x, y and stride)?

Size is an invariant part of a dma-buf. Everything else wrt metadata isn't
tracked in the kernel, so doesn't make much sense to expose it ;-)
-Daniel

ACK, thanks Daniel.


Re: [PATCH v2 0/6] DRM: VMA Access Management and Render Nodes

2013-08-25 Thread Martin Peres

On 25/08/2013 17:09, David Herrmann wrote:

Hi

On Fri, Aug 23, 2013 at 2:00 PM, Martin Peres martin.pe...@free.fr wrote:

On 23/08/2013 13:13, David Herrmann wrote:


Hi

I reduced the vma access-management patches to a minimum. I now do filp*
tracking in gem unconditionally and force drm_gem_mmap() to check this.
Hence, all gem drivers are safe now. For TTM drivers, I now use the
already available verify_access() callback to get access to the
underlying gem-object. Pretty simple.. Why hadn't I thought of that
before?

Long story short: All drivers using GEM are safe now. This leaves
vmwgfx.. But they do their own access-management, anyway.


Great! Thanks! Have you checked they are really safe with my exploits?
I'll have another round of review even if it looked good the last time I
checked.

Good you asked. I tested whether it works, I didn't actually verify
that it correctly fails in case of exploits. And in fact there is a
small bug (I return 1 instead of -EACCES, stupid verify_access()) so
user-space gets a segfault accessing the mmap when trying to exploit
this. That actually doesn't sound that bad, does it? ;)


Good to know I could contribute a little to your work. You're doing a 
great job!

v2 is on its way.


Yep, saw it.



The 3 patches on top implement render-nodes. I added a drm_rnodes module
parameter to core drm. You need to pass drm.rnodes=1 on the kernel
command-line or via sysfs _before_ loading a driver. Otherwise, render
nodes will not be created.


By default, having the render nodes doesn't change the way the userspace
works at all. So, what is the point of protecting it behind a parameter?

Is it to make it clear this isn't part of the API yet? I would say that
as long as libdrm hasn't been updated, this isn't part of the API anyway.

Hm, I wouldn't say so. Applications like weston and kmscon no longer
use the legacy drmOpen() facility. They use udev+open(). So once it's
upstream, it's part of the API regardless of libdrm. So the sole
purpose of drm_rnodes is to mark it as experimental.


Ah, I guess I'll have to have a look at this. I basically got preempted
from adding render node support to Weston and I didn't take the time
to check it again, the vma patches were more important first. Thanks
for saving me a giant headache with GEM/TTM, I spent two weekends
trying to track down a leak on radeon cards.

This allows us to test render-nodes and play with the API. I added FLINK
for now so we can better test it. Not sure whether we should allow it in
the end, though.


 From a security point of view, I don't think we should keep it, as
applications shouldn't be trusted not to do stupid things (because of
code injection). So,
unless we
plan on adding access control to flink via LSM, we shouldn't expose the
feature
on render nodes.

This is also what I think. We have a chance to get rid of all legacy
stuff, so maybe we should just drop it all.


Great!



 From a dev point of view, keeping it means that the XServer doesn't
have to know whether mesa supports render nodes or not. This is because
the authentication dance isn't available on render nodes, so the x-server
has to tell mesa if it should authenticate or not. The other solution is
to allow the authentication ioctls on render nodes and just return 0 if
this is a render node. This way, we won't need any modification in
mesa/xserver/dri2proto to pass the information that no authentication is
needed. In this solution, only libdrm and the ddx should be modified to
make use of the render node. That's not how I did it on my render node
patchset, can't remember why...

What do you guys think?

We discussed that a bit on IRC. Of course, we can add a lot of
wrappers and workarounds. We can make all the drmAuth stuff *just
work*. But that means, we keep all the legacy. As said, we have the
chance to introduce a new API and drop all the legacy. I think it is
worth a shot. And we also notice quite fast which user-space programs
need some rework.


Well, if by "all the legacy" you mean the authentication-related
functions, then yes.


How do you plan on handling the case where the ddx has been updated and
passes the render node to a not-yet-updated mesa? Mesa will try to
authenticate and it will fail.


Keeping the authentication IOCTLs seems to me like a lesser evil,
especially since they would basically do nothing.



Maybe we can get this into 3.11?


As long as we don't have to keep the interface stable (I don't want to
expose flink on render nodes), I'm all for pushing the code now.
Otherwise, a kernel branch somewhere is sufficient.

Do you plan on checking my userspace patches too? Those are enough to
make use of the render nodes on X, but I haven't tested that all the
combinations of versions would still work. The libdrm work should be
quite solid though (there are even libdrm tests for the added
functionalities :)).

I plan on having a working user-space for XDC. Most of your patches
can be copied unchanged indeed. But servers
