Quoting Bjorn Andersson (2021-08-25 16:42:29)
> As the Qualcomm DisplayPort driver only supports a single instance of
> the driver, the commonly used struct dp_display is kept in a global
> variable. As we introduce additional instances, this obviously doesn't
> work.
>
> Replace this with a combinat
Pinned contexts, like the migrate contexts, need to be reset after resume
since their context image may have been lost. Also the GuC needs to
register pinned contexts.
Add a list to struct intel_engine_cs where we add all pinned contexts on
creation, and traverse that list at __engine_unpark() time to re
From: Guangming Cao
When mapping the memory represented by a dma-buf into a device's
address space, it might be desirable to map the memory with
certain DMA attributes. Thus, introduce the dma_mapping_attrs
field in the dma_buf_attachment structure so that when
the memory is mapped with dma_buf_
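A minimal importer-side sketch of how the proposed field could be used. The attach/map calls are the existing dma-buf API; the dma_mapping_attrs assignment assumes the field introduced by this patch, and the attribute chosen here is only an example:

/* Illustrative only: map a dma-buf with a specific DMA attribute. */
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static struct sg_table *map_with_attrs(struct dma_buf *buf, struct device *dev)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	attach = dma_buf_attach(buf, dev);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	/* Proposed field: picked up by the exporter when mapping. */
	attach->dma_mapping_attrs = DMA_ATTR_SKIP_CPU_SYNC;

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		dma_buf_detach(buf, attach);

	return sgt;
}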
On 2021.08.20 12:56:34 -0700, Luis Chamberlain wrote:
> On Fri, Aug 20, 2021 at 04:17:24PM +0200, Christoph Hellwig wrote:
> > On Thu, Aug 19, 2021 at 04:29:29PM +0800, Zhenyu Wang wrote:
> > > I'm working on below patch to resolve this. But I met a weird issue in
> > > case when building i915 as m
On 2021.08.20 16:17:24 +0200, Christoph Hellwig wrote:
> On Thu, Aug 19, 2021 at 04:29:29PM +0800, Zhenyu Wang wrote:
> > I'm working on below patch to resolve this. But I met a weird issue in
> > case when building i915 as module and also kvmgt module, it caused
> > busy wait on request_module("kv
On 2021.08.19 17:43:43 +0300, Joonas Lahtinen wrote:
> Quoting Zhenyu Wang (2021-08-19 11:29:29)
> > On 2021.08.17 13:22:03 +0800, Zhenyu Wang wrote:
> > > > On 2021.08.16 19:34:58 +0200, Christoph Hellwig wrote:
> > > > > Any updates on this? I'd really hate to miss this merge window.
> > > >
>
On 8/22/21 6:20 PM, Jim Cromie wrote:
> DEFINE_DYNAMIC_DEBUG_CATEGORIES(name, var, bitmap_desc, @bit_descs)
> allows users to define a drm.debug style (bitmap) sysfs interface, and
> to specify the desired mapping from bits[0-N] to the format-prefix'd
> pr_debug()s to be controlled.
>
> DEFINE_
On 8/22/21 6:19 PM, Jim Cromie wrote:
> Add a const void* data member to the struct, to allow attaching
> private data that will be used soon by a setter method (via kp->data)
> to perform more elaborate actions.
>
> To attach the data at compile time, add new macros:
>
> module_param_cb_data(
On 8/22/21 6:19 PM, Jim Cromie wrote:
> This patchset does 3 main things.
>
> Adds DEFINE_DYNAMIC_DEBUG_CATEGORIES to define bitmap => category
> control of pr_debugs, and to create their sysfs entries.
>
> Uses it in amdgpu, i915 to control existing pr_debugs according to
> their ad-hoc categ
On 2021-08-26 12:55 a.m., Liu, Monk wrote:
[AMD Official Use Only]
But for timer pending case (common case) your mod_delayed_work will effectively
do exactly the same if you don't use per job TTLs - you mod it to
sched->timeout value which resets the pending timer to again count from 0.
issue:
in cleanup_job the cancel_delayed_work will cancel a TO timer
even if its corresponding job is still running.
fix:
do not cancel the timer in cleanup_job; instead do the cancelling
only when the head job is signaled, and if there is a "next" job,
start the timeout again.
v2:
further clea
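A rough sketch of the flow described above, not the actual patch: the drm_gpu_scheduler fields are the upstream ones, drm_sched_start_timeout() is a scheduler-internal helper, and the surrounding logic is illustrative only.

/* Sketch: only cancel/re-arm the timeout around a signaled head job. */
static struct drm_sched_job *
sched_get_cleanup_job_sketch(struct drm_gpu_scheduler *sched)
{
	struct drm_sched_job *job;

	spin_lock(&sched->job_list_lock);

	job = list_first_entry_or_null(&sched->pending_list,
				       struct drm_sched_job, list);
	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
		/* Only cancel the TO timer once the head job has signaled. */
		cancel_delayed_work(&sched->work_tdr);
		list_del_init(&job->list);

		/* If another job is still pending, re-arm the timeout for it. */
		if (!list_empty(&sched->pending_list))
			drm_sched_start_timeout(sched);
	} else {
		job = NULL;
	}

	spin_unlock(&sched->job_list_lock);
	return job;
}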
[AMD Official Use Only]
>>But for timer pending case (common case) your mod_delayed_work will
>>effectively do exactly the same if you don't use per job TTLs - you mod it to
>> sched->timeout value which resets the pending timer to again count from 0.
My patch will only modify the timer (resta
On Wed, Aug 25, 2021 at 11:39:10AM +0100, Tvrtko Ursulin wrote:
>
> On 27/07/2021 01:23, Matthew Brost wrote:
> > When using GuC submission, if a context gets banned disable scheduling
> > and mark all inflight requests as complete.
> >
> > Cc: John Harrison
> > Signed-off-by: Matthew Brost
> >
Lock the xarray and take ref to the context if needed.
v2:
(Checkpatch)
- Add new line after declaration
(Daniel Vetter)
- Correct put / get accounting in xa_for_loops
v3:
(Checkpatch)
- Extra new line
Reviewed-by: Daniele Ceraolo Spurio
Signed-off-by: Matthew Brost
---
.../gpu/drm/i9
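A sketch of the locking pattern being discussed. guc->context_lookup, intel_context and intel_context_put() are real i915 names, but the loop body and ref handling here are illustrative only:

/* Illustrative: hold the xarray lock, take a context reference, then drop
 * the xarray lock before doing work that takes other locks. */
static void for_each_guc_context_sketch(struct intel_guc *guc)
{
	struct intel_context *ce;
	unsigned long index, flags;

	xa_lock_irqsave(&guc->context_lookup, flags);
	xa_for_each(&guc->context_lookup, index, ce) {
		if (!kref_get_unless_zero(&ce->ref))
			continue;
		xa_unlock(&guc->context_lookup);

		/* ... operate on ce without holding the xarray lock ... */

		intel_context_put(ce);
		xa_lock(&guc->context_lookup);
	}
	xa_unlock_irqrestore(&guc->context_lookup, flags);
}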
Move the GuC management fields in the context under the guc_active struct, as
this is where the lock that protects these fields lives. Also, only set the
guc_prio field once during context init.
v2:
(Daniele)
- set CONTEXT_SET_INIT
Signed-off-by: Matthew Brost
---
drivers/gpu/drm/i915/gt/intel_context_types.
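An approximate shape of the grouping described above; the member names loosely follow the i915 patch, but sizes and types here are illustrative:

/* Illustrative grouping: keep the GuC-managed state next to its lock. */
struct guc_active_sketch {
	spinlock_t lock;		/* protects everything in this struct */
	struct list_head requests;	/* requests owned by the GuC backend */
	u8 prio;			/* guc_prio: set once at context init */
	u32 prio_count[4];		/* per-priority request counts */
};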
s/static inline/static/g + fix function argument alignment to make
checkpatch happy.
Signed-off-by: Matthew Brost
---
.../gpu/drm/i915/gt/uc/intel_guc_submission.c | 116 +-
1 file changed, 57 insertions(+), 59 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submi
It isn't safe to scrub for missing G2H or continue with the reset until
all G2H processing is complete. Flush the G2H work queue during reset to
ensure it is done running. No need to call the IRQ handler directly
either as the scrubbing code can deal with any missing G2H.
Fixes: eb5e7da736f3 ("drm
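The fix amounts to a flush before scrubbing; a minimal sketch, assuming the CT receive work item (ct->requests.worker) is the G2H worker referenced above:

/* Sketch: make sure the G2H handler has finished before the reset path
 * scrubs for missing G2H. */
static void reset_flush_g2h_sketch(struct intel_guc_ct *ct)
{
	flush_work(&ct->requests.worker);
	/* From here on, the scrub code can account for any missing G2H. */
}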
Previously we used some clever tricks to avoid taking a lock when touching
guc_state.sched_state in certain cases. Don't do that; enforce the use
of the lock.
Part of this is removing a dead code path from guc_lrc_desc_pin where a
context could be deregistered when the aforementioned function was
called
A subsequent patch will flip the locking hierarchy from
ce->guc_state.lock -> sched_engine->lock to sched_engine->lock ->
ce->guc_state.lock. As such we need to release the submit fence for a
request from an IRQ to break a lock inversion - i.e. the fence must be
released when holding ce->guc_state.l
To make ownership of locking clear move fields (guc_id, guc_id_ref,
guc_id_link) to sub structure guc_id in intel_context.
Reviewed-by: Daniele Ceraolo Spurio
Signed-off-by: Matthew Brost
---
drivers/gpu/drm/i915/gt/intel_context.c | 4 +-
drivers/gpu/drm/i915/gt/intel_context_types.h |
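Roughly the sub-structure being introduced (a sketch; the exact i915 member names may differ slightly):

/* Illustrative: group the guc_id bookkeeping under one named member. */
struct guc_id_sketch {
	u16 id;			/* the GuC context id itself */
	atomic_t ref;		/* formerly guc_id_ref */
	struct list_head link;	/* formerly guc_id_link */
};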
When the GuC does a media reset, it copies a golden context state back
into the corrupted context's state. The address of the golden context
and the size of the engine state restore are passed in via the GuC ADS.
The i915 had a bug where it passed in the whole size of the golden
context, not the si
Drop the pin count check trick between a sched_disable and re-pin; now rely
on the lock and a counter of the number of committed requests to determine
if scheduling should be disabled on the context.
Reviewed-by: Daniele Ceraolo Spurio
Signed-off-by: Matthew Brost
---
drivers/gpu/drm/i915/gt/intel_co
Move guc_blocked fence to struct guc_state as the lock which protects
the fence lives there.
s/ce->guc_blocked/ce->guc_state.blocked/g
v2:
(Daniele)
- s/blocked_fence/blocked/g
Reviewed-by: Daniele Ceraolo Spurio
Signed-off-by: Matthew Brost
---
drivers/gpu/drm/i915/gt/intel_context.c
Add GuC kernel doc for all structures added thus far for GuC submission
and update the main GuC submission section with the new interface
details.
v2:
- Drop guc_active.lock DOC
v3:
(Daniele)
- Fixup a few kernel doc comments
Signed-off-by: Matthew Brost
---
drivers/gpu/drm/i915/gt/intel_con
Rework and simplify the locking with GuC submission. Drop
sched_state_no_lock and move all fields under guc_state.sched_state,
protecting all of these fields with guc_state.lock. This requires
changing the locking hierarchy from guc_state.lock -> sched_engine.lock
to sched_engine.lock -> guc_state
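A sketch of the new lock ordering described above, purely illustrative:

/* New hierarchy: sched_engine->lock outer, ce->guc_state.lock inner. */
static void with_both_locks_sketch(struct i915_sched_engine *sched_engine,
				   struct intel_context *ce)
{
	unsigned long flags;

	spin_lock_irqsave(&sched_engine->lock, flags);
	spin_lock(&ce->guc_state.lock);

	/* ... it is now safe to touch ce->guc_state.sched_state ... */

	spin_unlock(&ce->guc_state.lock);
	spin_unlock_irqrestore(&sched_engine->lock, flags);
}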
Reset the LRC descriptor if a context registration returns -ENODEV, as this
means we are mid-reset.
Fixes: eb5e7da736f3 ("drm/i915/guc: Reset implementation for new GuC interface")
Signed-off-by: Matthew Brost
Reviewed-by: Daniele Ceraolo Spurio
---
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 6
Don't drop ce->guc_active.lock when unwinding a context after reset.
At one point we had to drop this because of a lock inversion but that is
no longer the case. It is much safer to hold the lock so let's do that.
Fixes: eb5e7da736f3 ("drm/i915/guc: Reset implementation for new GuC interface")
Rev
Now that we have a locking hierarchy of sched_engine->lock ->
ce->guc_state, everything from guc_active can be moved into guc_state and
protected by the guc_state.lock.
Signed-off-by: Matthew Brost
Reviewed-by: Daniele Ceraolo Spurio
---
drivers/gpu/drm/i915/gt/intel_context.c | 10 +--
drivers
Error captures can now be done in a work queue processing G2H messages.
These messages need to be completely processed in the reset
path, to avoid races in the missing G2H cleanup; this creates a
dependency on memory allocations and dma fences (i915_requests).
Requests depend on resets,
Kick the tasklet after queuing a request so it is submitted in a timely manner.
Fixes: 3a4cdf1982f0 ("drm/i915/guc: Implement GuC context operations for new inteface")
Signed-off-by: Matthew Brost
Reviewed-by: Daniele Ceraolo Spurio
---
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 1 +
1 file
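The change is essentially a one-line kick after queuing; a sketch in which queue_request() and rq_prio() stand in for the existing GuC submission helpers:

/* Illustrative: add the request to the queue, then kick the tasklet. */
static void add_and_kick_sketch(struct i915_sched_engine *sched_engine,
				struct i915_request *rq)
{
	queue_request(sched_engine, rq, rq_prio(rq));
	/* The added kick, so submission happens in a timely manner. */
	tasklet_hi_schedule(&sched_engine->tasklet);
}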
While debugging an issue with full GT resets I went down a rabbit hole
thinking the scrubbing of lost G2H wasn't working correctly. This proved
to be incorrect, as this was working just fine, but this chase inspired me
to write a selftest to prove that this works. This simple selftest
injects errors
A context can get destroyed after cancelling a request, if a context or
GT reset occurs, so take a reference to the context when cancelling a
request.
Fixes: 62eaf0ae217d ("drm/i915/guc: Support request cancellation")
Signed-off-by: Matthew Brost
Reviewed-by: Daniele Ceraolo Spurio
---
drivers/gpu/
If the context is reset as a result of the request cancellation, the
context reset G2H is received after the schedule disable done G2H, which is
the wrong order. The schedule disable done G2H releases the waiting
request cancellation code, which resubmits the context. This races
with the context reset G2H
Propagating errors to dependent fences is broken and can lead to
errors from one client ending up in another. In 3761baae908a (Revert
"drm/i915: Propagate errors on awaiting already signaled fences"), we
attempted to get rid of fence error propagation but missed the case
added in 8e9f84cf5cac ("dr
When unblocking a context, do not enable scheduling if the context is
banned, its guc_id is invalid, or it is not registered.
v2:
(Daniele)
- Add helper for unblock
Fixes: 62eaf0ae217d ("drm/i915/guc: Support request cancellation")
Signed-off-by: Matthew Brost
Reviewed-by: Daniele Ceraolo Spurio
Cc:
---
Daniel Vetter pointed out that locking in the GuC submission code was
overly complicated, let's clean this up a bit before introducing more
features in the GuC submission backend.
Also fix some CI failures, port fixes from our internal tree, and add a
few more selftests for coverage.
Lastly, add
Add a cancel request selftest that results in an engine reset to cancel
the request as it is non-preemptible. Also insert a NOP request after
the cancelled request and confirm that it completes successfully.
Signed-off-by: Matthew Brost
---
drivers/gpu/drm/i915/selftests/i915_request.c | 100 ++
Rather than processing 1 G2H at a time and re-queuing the work queue if
more messages exist, process all the G2H in a single pass of the work
queue.
Signed-off-by: Matthew Brost
Reviewed-by: Daniele Ceraolo Spurio
Cc: Daniel Vetter
Cc: Michal Wajdeczko
---
drivers/gpu/drm/i915/gt/uc/intel_guc
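Conceptually the worker changes from handle-one-then-requeue to a drain loop; a sketch, where ct_process_one_g2h() is a hypothetical helper returning false once the buffer is empty:

/* Illustrative: process every queued G2H in a single pass of the worker. */
static void ct_g2h_worker_sketch(struct work_struct *w)
{
	struct intel_guc_ct *ct =
		container_of(w, struct intel_guc_ct, requests.worker);

	while (ct_process_one_g2h(ct))
		;	/* keep going instead of re-queuing per message */
}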
Fix a small race that could result in incorrect accounting of the number
of outstanding G2H. Basically, prior to this patch we did not increment
the number of outstanding G2H if we encountered a GT reset while sending
an H2G. This was incorrect, as the context state had already been updated
to anticipate a
When unwinding requests on a reset context, if other requests in the
context are in the priority list, the requests could be resubmitted out
of seqno order. Traverse the list of active requests in reverse and
append to the head of the priority list to fix this.
Fixes: eb5e7da736f3 ("drm/i915/guc: R
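A sketch of the reverse-order unwind described above. The list and lookup names follow i915 (rq_prio() is a local helper), but the body is illustrative, not the actual patch:

/* Illustrative: walk the context's active requests newest-first and put
 * each at the head of its priority list, preserving seqno order. */
static void unwind_in_order_sketch(struct i915_sched_engine *sched_engine,
				   struct intel_context *ce)
{
	struct i915_request *rq, *rn;

	list_for_each_entry_safe_reverse(rq, rn, &ce->guc_active.requests,
					 sched.link) {
		list_del_init(&rq->sched.link);
		list_add(&rq->sched.link,
			 i915_sched_lookup_priolist(sched_engine, rq_prio(rq)));
	}
}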
Prior to this patch the blocked context counter was cleared on
init_sched_state (used when registering a context and during resets), which is
incorrect. This state needs to be persistent, or the counter can read an
incorrect value, resulting in scheduling never getting enabled again.
Fixes: 62eaf0ae217d ("
Hi Dave, Daniel,
A few last fixes for 5.14.
The following changes since commit daa7772d477ec658dc1fd9127549a7996d8e0c2b:
Merge tag 'amd-drm-fixes-5.14-2021-08-18' of
https://gitlab.freedesktop.org/agd5f/linux into drm-fixes (2021-08-20 15:13:56
+1000)
are available in the Git repository at:
On 2021-08-25 10:31 p.m., Liu, Monk wrote:
[AMD Official Use Only]
Hi Andrey
I'm not quite sure if I read you correctly
Seems to me you can only do it for an empty pending list, otherwise you risk
cancelling a legit new timer that was started by the next job, or not restarting
the timer at all sin
On 26-08-21, 08:24, Viresh Kumar wrote:
> On 25-08-21, 18:41, Dmitry Osipenko wrote:
> > Thinking a bit more about this, I got a nicer variant which actually works
> > in all cases for Tegra.
> >
> > Viresh / Ulf, what do you think about this:
>
> This is what I have been suggesting from day 1 :
On 25-08-21, 18:41, Dmitry Osipenko wrote:
> Thinking a bit more about this, I got a nicer variant which actually works in
> all cases for Tegra.
>
> Viresh / Ulf, what do you think about this:
This is what I have been suggesting from day 1 :)
https://lore.kernel.org/linux-staging/2021081805584
[AMD Official Use Only]
Hi Andrey
I'm not quite sure if I read you correctly
>>Seems to me you can only do it for an empty pending list, otherwise you risk
>>cancelling a legit new timer that was started by the next job, or not
>>restarting the timer at all since your timer was still pending when next
Use a more common logging style.
Signed-off-by: zhaoxiao
---
v2: Remove the line: #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
drivers/gpu/drm/msm/adreno/adreno_gpu.c:23:9: warning: 'pr_fmt' macro
redefined [-Wmacro-redefined]
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
^
inclu
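For reference, the warning comes from pr_fmt being defined twice; one common way to keep a module-prefixed format without tripping -Wmacro-redefined is to guard the definition (a generic sketch, not the adreno patch itself):

/* Define pr_fmt once, before any include that may also define it. */
#ifndef pr_fmt
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#endif

#include <linux/printk.h>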
From: Anthoine Bourgeois
This implements the context initialization ioctl. A list of params
is passed in by userspace, and the kernel driver validates them. The
only currently supported param is VIRTGPU_CONTEXT_PARAM_CAPSET_ID.
If the context has already been initialized, -EEXIST is returned.
This
This advertises the context init feature to userspace, along with
a mask of supported capabilities.
Signed-off-by: Gurchetan Singh
Acked-by: Lingfeng Yang
---
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
The plumbing is all here to do this. Since we always use the
default fence context when allocating a fence, this makes no
functional difference.
We can't process just the largest fence id anymore, since it's
associated with different timelines. It's fine for fence_id
260 to signal before 25
For the Sommelier guest Wayland proxy, it's desirable for the
DRM fd to be pollable in response to a host compositor event.
This can also be used by the 3D driver to poll events on a CPU
timeline.
This enables the DRM fd associated with a particular 3D context
to be polled independent of KMS even
Similar to DRM_VMW_EVENT_FENCE_SIGNALED. Sends a pollable event
to the DRM file descriptor when a fence on a specific ring is
signaled.
One difference is the event is not exposed via the UAPI -- this is
because host responses are on a shared memory buffer of type
BLOB_MEM_GUEST [this is the commo
Each fence should be associated with a [fence ID, fence_context,
seqno]. The seqno is just the fence id.
To get the fence context, we add the ring_idx to the 3D context's
base_fence_ctx. The ring_idx is between 0 and 31, inclusive.
Each 3D context will have its own base_fence_ctx. The r
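In code terms the mapping is just an offset into the per-context fence-context range; a sketch with assumed names, where base_fence_ctx would come from dma_fence_context_alloc():

/* Illustrative helper: compute the dma_fence context for a given ring.
 * Each 3D context owns 32 consecutive fence contexts, one per ring;
 * the seqno is simply the fence id. */
static u64 virtio_fence_context_sketch(u64 base_fence_ctx, u32 ring_idx)
{
	/* ring_idx is between 0 and 31, inclusive. */
	return base_fence_ctx + ring_idx;
}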
These were defined in the previous commit. We'll need these
parameters when allocating a dma_fence. The use case for this
is multiple synchronization timelines.
The maximum number of timelines per 3D instance will be 32. Usually,
only 2 are needed -- one for CPU commands, and another for GPU
com
We don't want fences from different 3D contexts (virgl, gfxstream,
venus) to be on the same timeline. With explicit context creation,
we can specify the number of rings each context wants.
Execbuffer can specify which ring to use.
Signed-off-by: Gurchetan Singh
Acked-by: Lingfeng Yang
---
driv
This feature allows for each virtio-gpu 3D context to be created
with a "context_init" variable. This variable can specify:
- the type of protocol used by the context via the capset id.
This is useful for differentiating virgl, gfxstream, and venus
protocols by host userspace.
- other th
From: Anthoine Bourgeois
Let's probe for VIRTIO_GPU_F_CONTEXT_INIT.
Create a new DRM_INFO(..) line since the current one is getting
too long.
Signed-off-by: Anthoine Bourgeois
Acked-by: Lingfeng Yang
---
drivers/gpu/drm/virtio/virtgpu_debugfs.c | 1 +
drivers/gpu/drm/virtio/virtgpu_drv.c
The valid capability IDs are between 1 and 63, and are defined in the
virtio gpu spec. This is used for error checking the subsequent
patches. We're currently only using 2 capability IDs, so this
should be plenty for the immediate future.
Signed-off-by: Gurchetan Singh
Acked-by: Lingfeng Yang
---
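The error checking mentioned above is a simple bounds test; a sketch with an assumed MAX_CAPSET_ID constant:

/* Illustrative: capability set IDs 1..63 are valid. */
#define MAX_CAPSET_ID 63

static int validate_capset_id_sketch(u64 value)
{
	if (value == 0 || value > MAX_CAPSET_ID)
		return -EINVAL;
	return 0;
}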
This change allows creating contexts depending on a set of
context parameters. The meaning of each of the parameters
is listed below:
1) VIRTGPU_CONTEXT_PARAM_CAPSET_ID
This determines the type of a context based on the capability set
ID. For example, the current capsets:
VIRTIO_GPU_CAPSET_VI
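From the userspace side, context init with a capset ID would look roughly like this. The structs, ioctl name and capset value follow the UAPI proposed in this series, so treat the exact names and numbers as assumptions:

/* Illustrative userspace usage of the proposed context-init UAPI. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/virtgpu_drm.h>

static int init_context_sketch(int drm_fd)
{
	struct drm_virtgpu_context_set_param param = {
		.param = VIRTGPU_CONTEXT_PARAM_CAPSET_ID,
		.value = 4, /* e.g. the Venus capset (assumed value) */
	};
	struct drm_virtgpu_context_init init = {
		.num_params = 1,
		.ctx_set_params = (uintptr_t)&param,
	};

	/* Returns -EEXIST if the context was already initialized. */
	return ioctl(drm_fd, DRM_IOCTL_VIRTGPU_CONTEXT_INIT, &init);
}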
Dear all,
I'm excited to share with you a ground-breaking and dare I say it even
shocking discovery in graphics virtualization.
The critics are raving -- many keen observers of VM graphics are
proclaiming 2021 to be "the year of the context type", eclipsing the
revolutionary blob resource of 2020
Previously, master_lookup_lock was introduced in
commit 0b0860a3cf5e ("drm: serialize drm_file.master with a new
spinlock") to serialize accesses to drm_file.master. This then allowed
us to write drm_file_get_master in commit 56f0729a510f ("drm: protect
drm_master pointers in drm_lease.c").
The ra
drm_lease_held calls drm_file_get_master. However, this function is
sometimes called while holding on to drm_device.master_rwsem or
modeset_mutex. Since master_rwsem will replace
drm_file.master_lookup_lock in drm_file_get_master in a future patch,
this results in both recursive locking, and an inv
__drm_mode_object_find checks if the given drm file holds the required
lease on a object by calling _drm_lease_held. _drm_lease_held in turn
uses drm_file_get_master to access drm_file.master.
However, in a future patch, the drm_file.master_lookup_lock in
drm_file_get_master will be replaced by dr
In drm_client_modeset.c and drm_fb_helper.c,
drm_master_internal_{acquire,release} are used to avoid races with DRM
userspace. These functions hold onto drm_device.master_rwsem while
committing, and bail if there's already a master.
However, there are other places where modesetting rights can race
In a future patch, a read lock on drm_device.master_rwsem is
held in the ioctl handler before the check for ioctl
permissions. However, this inverts the lock hierarchy of
drm_global_mutex --> master_rwsem.
To avoid this, we do some prep work to grab the drm_global_mutex
before checking for ioctl p
drm_device.master_mutex currently protects the following:
- drm_device.master
- drm_file.master
- drm_file.was_master
- drm_file.is_master
- drm_master.unique
- drm_master.unique_len
- drm_master.magic_map
There is a clear separation between functions that read or change
these attributes. Hence, c
drm_master_release can be called on a drm_file without a master, which
results in a null ptr dereference of file_priv->master->magic_map. The
three cases are:
1. Error path in drm_open_helper
drm_open():
drm_open_helper():
drm_master_open():
drm_new_set_master(); <--- returns -
Hi,
Seems that Intel-gfx CI still doesn't like what's going on, so I updated
the series to remove more recursive locking again.
Note: patch 5 touches a number of files, including the Intel and VMware
drivers, but most changes are simply switching a function call to the
appropriate locked/unlocked
[AMD Official Use Only]
>> All we need to take care of is to cancel_delayed_work() when we know that
>> the job is completed.
That's why I want to remove the cancel_delayed_work at the beginning of
cleanup_job(), because at that moment we don't know if
there is a job completed (sched could be w
Quoting Bjorn Andersson (2021-07-26 16:13:51)
> eDP panels might need some power sequencing and backlight management,
> so make it possible to associate a drm_panel with a DP instance and
> prepare and enable the panel accordingly.
>
> Signed-off-by: Bjorn Andersson
> ---
>
> This solves my immedi
Quoting Bjorn Andersson (2021-08-25 15:25:56)
> Not all platforms have DP_P0 at offset 0x1000 from the beginning of the
> DP block. So split the dss_io_data memory region into a set of
> sub-regions, to make it possible in the next patch to specify each of
> the sub-regions individually.
>
> Signed-
Quoting Bjorn Andersson (2021-08-25 15:25:55)
> diff --git a/drivers/gpu/drm/msm/dp/dp_parser.c
> b/drivers/gpu/drm/msm/dp/dp_parser.c
> index c064ced78278..215065336268 100644
> --- a/drivers/gpu/drm/msm/dp/dp_parser.c
> +++ b/drivers/gpu/drm/msm/dp/dp_parser.c
> @@ -19,40 +19,30 @@ static const
> drm_gem_cma_mmap() cannot assume every implementation of dma_mmap_wc()
> will end up calling remap_pfn_range() (which happens to set the relevant
> vma flag, among others), so in order to make sure expectations around
> VM_DONTEXPAND are met, let it explicitly set the flag like most other
> GEM m
On 8/18/2021 11:16 PM, Matthew Brost wrote:
Add GuC kernel doc for all structures added thus far for GuC submission
and update the main GuC submission section with the new interface
details.
v2:
- Drop guc_active.lock DOC
Signed-off-by: Matthew Brost
---
drivers/gpu/drm/i915/gt/intel_co
On 8/18/2021 11:16 PM, Matthew Brost wrote:
Now that we have a locking hierarchy of sched_engine->lock ->
ce->guc_state, everything from guc_active can be moved into guc_state and
protected by the guc_state.lock.
Signed-off-by: Matthew Brost
Reviewed-by: Daniele Ceraolo Spurio
Daniele
---
On 8/18/2021 11:16 PM, Matthew Brost wrote:
Drop pin count check trick between a sched_disable and re-pin, now rely
on the lock and counter of the number of committed requests to determine
if scheduling should be disabled on the context.
Signed-off-by: Matthew Brost
Reviewed-by: Daniele Ce
On 8/25/2021 5:41 PM, Matthew Brost wrote:
On Wed, Aug 25, 2021 at 05:44:11PM -0700, Daniele Ceraolo Spurio wrote:
On 8/18/2021 11:16 PM, Matthew Brost wrote:
Lock the xarray and take ref to the context if needed.
v2:
(Checkpatch)
- Add new line after declaration
(Daniel Vetter)
On Wed, Aug 25, 2021 at 05:44:11PM -0700, Daniele Ceraolo Spurio wrote:
>
>
> On 8/18/2021 11:16 PM, Matthew Brost wrote:
> > Lock the xarray and take ref to the context if needed.
> >
> > v2:
> > (Checkpatch)
> > - Add new line after declaration
> > (Daniel Vetter)
> > - Correct put /
On 8/18/2021 11:16 PM, Matthew Brost wrote:
Lock the xarray and take ref to the context if needed.
v2:
(Checkpatch)
- Add new line after declaration
(Daniel Vetter)
- Correct put / get accounting in xa_for_loops
Signed-off-by: Matthew Brost
---
.../gpu/drm/i915/gt/uc/intel_guc_s
Hi, Enric:
Enric Balletbo i Serra wrote on Wednesday, 25 August 2021 at 6:26 PM:
>
> Update device tree binding documentation for the dsi to add the optional
> property to reset the dsi controller.
Reviewed-by: Chun-Kuang Hu
>
> Signed-off-by: Enric Balletbo i Serra
> Acked-by: Rob Herring
> ---
>
> (no changes
The sc8180x has 2 DP and 1 eDP controllers; add support for these to the
DP driver.
Signed-off-by: Bjorn Andersson
---
Changes since v1:
- Squashed DP and eDP data, as there's no reason to keep them separate today
drivers/gpu/drm/msm/dp/dp_display.c | 7 +++
1 file changed, 7 insertions(+)
Functions in the DisplayPort code that relate to individual instances
(encoders) are passed both the struct msm_dp and the struct drm_encoder. But
in a situation where multiple DP instances exist, this means that
the caller needs to resolve which struct msm_dp relates to the struct
drm_encoder
The Qualcomm SC8180x has 2 DP controllers and 1 eDP controller; add
compatibles for these to the msm/dp binding.
Reviewed-by: Stephen Boyd
Signed-off-by: Bjorn Andersson
---
Changes since v1:
- Picked up Stephen's R-b
.../devicetree/bindings/display/msm/dp-controller.yaml | 2 ++
1 f
Based on the removal of the g_dp_display and the movement of the
priv->dp lookup into the DP code, it's now possible to have multiple
DP instances.
In line with the other controllers in the MSM driver, introduce a
per-compatible list of base addresses which is used to resolve the
"instance id" for
As the Qualcomm DisplayPort driver only supports a single instance of
the driver, the commonly used struct dp_display is kept in a global
variable. As we introduce additional instances, this obviously doesn't
work.
Replace this with a combination of existing references to adjacent
objects and drvdat
The current implementation supports a single DP instance and the DPU code will
only match it against INTF_DP instance 0. These patches extend this to allow
multiple DP instances and support for matching against DP instances beyond 0.
This is based on v4 of Dmitry's work on multiple DSI interfaces
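Matching against DP instances beyond 0 amounts to resolving the controller's register base against the per-compatible descriptor list; a sketch with assumed struct and field names:

/* Illustrative: resolve which DP instance a controller corresponds to by
 * matching its register base against the per-compatible descriptor list. */
static int dp_display_find_instance_sketch(const struct msm_dp_desc *descs,
					   unsigned int count,
					   resource_size_t io_start)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		if (descs[i].io_start == io_start)
			return i;

	return -ENODEV;
}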
On Thu 29 Jul 04:59 CDT 2021, Dmitry Baryshkov wrote:
> On 27/07/2021 02:13, Bjorn Andersson wrote:
> > eDP panels might need some power sequencing and backlight management,
> > so make it possible to associate a drm_panel with a DP instance and
> > prepare and enable the panel accordingly.
> >
>
On Thu 12 Aug 19:28 CDT 2021, sbill...@codeaurora.org wrote:
> On 2021-08-12 06:11, Stephen Boyd wrote:
> > Quoting Sankeerth Billakanti (2021-08-11 17:08:01)
[..]
> > > +static int dp_parser_gpio(struct dp_parser *parser)
> > > +{
> > > + struct device *dev = &parser->pdev->dev;
> > > +
On Wed, Aug 25, 2021 at 02:51:11PM -0700, Daniele Ceraolo Spurio wrote:
>
>
> On 8/18/2021 11:16 PM, Matthew Brost wrote:
> > Move GuC management fields in context under guc_active struct as this is
> > where the lock that protects these fields lives. Also only set guc_prio
> > field once during
On 8/25/2021 4:03 PM, Nick Desaulniers wrote:
On Tue, Aug 24, 2021 at 4:23 PM Nathan Chancellor wrote:
i915 enables a wider set of warnings with '-Wall -Wextra' then disables
several with cc-disable-warning. If an unknown flag gets added to
KBUILD_CFLAGS when building with clang, all subsequen
On Tue, Aug 24, 2021 at 4:23 PM Nathan Chancellor wrote:
>
> i915 enables a wider set of warnings with '-Wall -Wextra' then disables
> several with cc-disable-warning. If an unknown flag gets added to
> KBUILD_CFLAGS when building with clang, all subsequent calls to
> cc-{disable-warning,option} w
On Wed, Aug 25, 2021 at 02:51:11PM -0700, Daniele Ceraolo Spurio wrote:
>
>
> On 8/18/2021 11:16 PM, Matthew Brost wrote:
> > Move GuC management fields in context under guc_active struct as this is
> > where the lock that protects these fields lives. Also only set guc_prio
> > field once during
In order to deal with multiple memory ranges in the following commit,
change the ioremap wrapper to not poke directly into the dss_io_data
struct.
While at it, devm_ioremap_resource() already prints useful error
messages on failure, so omit the unnecessary prints from the caller.
Signed-off-by: Bj
Not all platforms have P0 at an offset of 0x1000 from the base address,
so add support for specifying each sub-region in DT. The code falls back
to the predefined offsets in the case that only a single reg is
specified, in order to support existing DT.
Reviewed-by: Stephen Boyd
Reviewed-by: Abhina
reg was defined as one region covering the entire DP block, but the
memory map is actually split into 4 regions, and obviously the size of
these regions differs between platforms.
Switch the reg to require that all four regions are specified instead.
It is expected that the implementation will handle
The non-devres version of ioremap is used, which requires manual
cleanup. But the code paths leading here are mixed with other devres
users, so rely on this for ioremap as well to simplify the code.
Reviewed-by: Abhinav Kumar
Reviewed-by: Stephen Boyd
Signed-off-by: Bjorn Andersson
---
Changes
Not all platforms have DP_P0 at offset 0x1000 from the beginning of the
DP block. So split the dss_io_data memory region into a set of
sub-regions, to make it possible in the next patch to specify each of
the sub-regions individually.
Signed-off-by: Bjorn Andersson
---
Changes since v1:
- Fixed n
It turns out that sc8180x (among others) doesn't have the same internal layout
of the 4 subblocks. This series therefore modifies the binding to require all
four regions to be described individually and then extends the driver to read
these four regions. The driver will fall back to read the old sin
On 8/18/2021 11:16 PM, Matthew Brost wrote:
Move GuC management fields in context under guc_active struct as this is
where the lock that protects these fields lives. Also only set guc_prio
field once during context init.
Can you explain what we gain by setting that only on first pin? AFAICS
Hi Kees,
Thanks for your patch.
The switch statement is needed for multiple planes, which is already approved in
this patch series.
https://patchwork.kernel.org/project/dri-devel/patch/20210728003126.1425028-13-anitha.chrisant...@intel.com/
This patch has dependencies on the previous patches in t