== Series Details ==
Series: drm/i915/dp: Switch to using the DRM function for reading DP link status
URL : https://patchwork.freedesktop.org/series/11230/
State : failure
== Summary ==
Series 11230v1 drm/i915/dp: Switch to using the DRM function for reading DP
link status
A significant proportion of the cmdparsing time for some batches is the
cost to find the register in the mmiotable. We ensure that those tables
are in ascending order such that we could do a binary search if it was
ever merited. It is.
Signed-off-by: Chris Wilson
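The ascending-order table lookup described above can be sketched as a plain binary search; the entry layout and function names here are illustrative, not the actual i915 cmdparser code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical register-table entry; the real i915 tables carry more fields. */
struct reg_entry {
	uint32_t addr;
};

/*
 * Binary search over a table sorted by ascending register address,
 * replacing a linear scan. Returns the matching entry or NULL.
 */
static const struct reg_entry *
find_reg(const struct reg_entry *table, size_t count, uint32_t addr)
{
	size_t lo = 0, hi = count;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (table[mid].addr < addr)
			lo = mid + 1;
		else if (table[mid].addr > addr)
			hi = mid;
		else
			return &table[mid];
	}
	return NULL;
}
```

This drops the per-command lookup from O(n) to O(log n), which is why keeping the tables sorted pays off once batches contain many register writes.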
Since I have been using the BCS_TIMESTAMP to measure latency of
execution upon the blitter ring, allow regular userspace to also read
from that register. They are already allowed RCS_TIMESTAMP!
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
We track the LRU access for eviction and bump the last access for the
user GGTT on set-to-gtt. When we do so we need to not only bump the
primary GGTT VMA but all partials as well. Similarly we want to
bump the last access tracking for when unpinning an object from the
scanout so that they do not
The single largest factor in the overhead of parsing the commands is the
setup of the virtual mapping to provide a contiguous block for the batch
buffer. If we keep those vmappings around (against the better judgement
of mm/vmalloc.c, which we offset by handwaving and looking suggestively
at the
If the command descriptor says to skip it, ignore checking for any
other conflict.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/i915_cmd_parser.c | 3 +++
1 file changed, 3 insertions(+)
diff --git
On the blitter (and in test code), we see long sequences of repeated
commands, e.g. XY_PIXEL_BLT, XY_SCANLINE_BLT, or XY_SRC_COPY. For these,
we can skip the hashtable lookup by remembering the previous command
descriptor and doing a straightforward compare of the command header.
The corollary is
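The one-entry cache described above can be sketched like this; the type and function names are illustrative stand-ins for the real cmdparser structures:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for the real command descriptor type. */
struct cmd_descriptor {
	uint32_t cmd_header;
};

/* Toy stand-in for the cmdparser hashtable (contents are illustrative). */
static const struct cmd_descriptor known_cmds[] = { {0x100}, {0x200}, {0x300} };
static int slow_lookups; /* counts hashtable walks, to show the saving */

static const struct cmd_descriptor *hashtable_lookup(uint32_t header)
{
	size_t i;

	slow_lookups++;
	for (i = 0; i < sizeof(known_cmds) / sizeof(known_cmds[0]); i++)
		if (known_cmds[i].cmd_header == header)
			return &known_cmds[i];
	return NULL;
}

/*
 * Fast path: for a run of repeated commands, a single compare of the
 * command header against the previous descriptor replaces the whole
 * hashtable lookup.
 */
static const struct cmd_descriptor *
find_cmd(uint32_t header, const struct cmd_descriptor **cached)
{
	if (*cached && (*cached)->cmd_header == header)
		return *cached;

	*cached = hashtable_lookup(header);
	return *cached;
}
```

For a blitter batch that is mostly XY_SCANLINE_BLT after XY_SCANLINE_BLT, almost every lookup hits the cached descriptor.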
If the developer adds a register in the wrong order, we BUG during boot.
That makes development and testing very difficult. Let's be a bit more
friendly and disable the command parser with a big warning if the tables
are invalid.
Signed-off-by: Chris Wilson
Reviewed-by:
If we need to use clflush to prepare our batch for reads from memory, we
can bypass the cache instead by using non-temporal copies.
Signed-off-by: Chris Wilson
Reviewed-by: Matthew Auld
---
drivers/gpu/drm/i915/i915_cmd_parser.c | 73
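A userspace-style illustration of a non-temporal copy, using SSE2 streaming stores (the kernel uses its own helpers, not these intrinsics — this is only a sketch of the technique on x86):

```c
#include <assert.h>
#include <emmintrin.h> /* _mm_stream_si32 (MOVNTI), _mm_sfence */
#include <stddef.h>
#include <string.h>

/*
 * Copy using non-temporal stores, which write to memory bypassing the
 * CPU cache: the destination is never pulled into the cache, so there
 * is nothing to clflush afterwards.
 */
static void copy_nontemporal(void *dst, const void *src, size_t len)
{
	int *d = dst;
	const int *s = src;
	size_t n = len / sizeof(int);
	size_t i;

	for (i = 0; i < n; i++)
		_mm_stream_si32(&d[i], s[i]);
	_mm_sfence(); /* order the streaming stores before later accesses */

	/* copy any tail bytes normally */
	memcpy(d + n, s + n, len % sizeof(int));
}
```

The trade-off is that non-temporal stores are only a win when the destination will not be read back through the cache soon, which is exactly the shadow-batch case here.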
We want to always use the partial VMA as a fallback for a failure to
bind the object into the GGTT. This extends the support for partial objects
in the GGTT to cover everything, not just objects too large.
v2: Call the partial view "view", not "partial".
Signed-off-by: Chris Wilson
Since commit 43566dedde54 ("drm/i915: Broaden application of
set-domain(GTT)") we allowed objects to be in the GTT domain, but unbound.
Therefore removing the GTT cache domain when removing the GGTT vma is no
longer semantically correct.
An unfortunate side-effect is we lose the wondrously named
Our current practice is to only name the actual list (here
dev_priv->fence_list) using "list", and elements upon that list are
referred to as "link". Further, the lru nature is a property of the list,
not of the node, and including it in the name does not disambiguate the
link from anything else.
Often times we do not want to evict mapped objects from the GGTT as
these are quite expensive to teardown and frequently reused (causing an
equally, if not more so, expensive setup). In particular, when faulting
in a new object we want to avoid evicting an active object, or else we
may trigger a
In order to support setting up fences for partial mappings of an object,
we have to align those mappings with the fence. The minimum chunksize we
choose is at least the size of a single tile row.
v2: Make minimum chunk size a define for later use
Signed-off-by: Chris Wilson
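The fence-alignment rule above can be sketched as two small helpers; the names and the page-granularity units are assumptions for illustration, not the actual i915 code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * A partial mapping that needs a fence must cover whole tile rows:
 * round the chunk size up to a multiple of the tile row, and align the
 * start of the partial view down to a chunk boundary. All quantities
 * are in pages.
 */
static uint32_t chunk_pages(uint32_t min_chunk, uint32_t tile_row_pages)
{
	if (tile_row_pages == 0) /* untiled: no extra constraint */
		return min_chunk;
	return ((min_chunk + tile_row_pages - 1) / tile_row_pages) *
	       tile_row_pages;
}

static uint32_t chunk_first_page(uint32_t fault_page, uint32_t chunk)
{
	return fault_page - fault_page % chunk;
}
```

So a fault at page 13 of an object with 4-page tile rows maps the chunk starting at page 12, keeping the partial view fence-aligned.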
For simplicity, we want to continue using a contiguous mapping of the
command buffer, but we can reduce the number of vmappings we hold by
switching over to a page-by-page copy from the user batch buffer to the
shadow. The cost for saving one linear mapping is about 5% in trivial
workloads - which
The existing code's hashfunction is very suboptimal (most 3D commands
use the same bucket, degrading the hash to a long list). The code even
acknowledges that the issue was known and the fix simple:
/*
* If we attempt to generate a perfect hash, we should be able to look at bits
* 31:29 of a
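A sketch of a hash along the lines that comment suggests, mixing in the command's client field from bits 31:29; the opcode field width, multiplier, and bucket count here are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

#define NBUCKETS 64u

/*
 * Include the client bits (31:29) of the command header in the hash so
 * that, e.g., 3D commands no longer all collapse into a single bucket.
 */
static unsigned int cmd_hash(uint32_t cmd_header)
{
	uint32_t client = cmd_header >> 29;          /* bits 31:29 */
	uint32_t opcode = (cmd_header >> 23) & 0x3f; /* bits 28:23, assumed */

	return (client * 37u + opcode) % NBUCKETS;
}
```

Commands differing only in opcode now land in different buckets instead of forming one long chain.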
If we want to create a partial vma from a chunk that is the same size as
the object, create a normal ggtt vma instead. The benefit is that it
will match future requests for the normal ggtt.
Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
Keep any error reported by the gup_worker until we are notified that the
arena has changed (via the mmu-notifier). This ensures that two
consecutive calls to i915_gem_object_get_pages() report the same error,
curtailing a loop of detecting a fault and requeueing
a
As we cannot access the backing pages behind stolen objects, we should
not attempt to do so for relocations.
Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
---
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 3 +++
1 file
If we want to read the pages directly via the CPU, we have to be sure
that we flush the writes via the GTT first (as the CPU cannot see
the address aliasing).
Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
---
The existing ABI says that scanouts are pinned into the mappable region
so that legacy clients (e.g. old Xorg or plymouthd) can write directly
into the scanout through a GTT mapping. However if the surface does not
fit into the mappable region, we are better off just trying to fit it
anywhere and
This is a companion to i915_gem_obj_prepare_shmem_read() that prepares
the backing storage for direct writes. It first serialises with the GPU,
pins the backing storage and then indicates what clflushes are required in
order for the writes to be coherent.
Whilst here, fix support for ancient CPUs
If we cannot release the fence (for example if someone is inexplicably
trying to write into a tiled framebuffer that is currently pinned to the
display! *cough* kms_frontbuffer_tracking *cough*) fallback to using the
page-by-page pwrite/pread interface, rather than fail the syscall
entirely.
When using the aliasing ppgtt and pageflipping with the shrinker/eviction
active, we note that we often have to rebind the backbuffer before
flipping onto the scanout because it has an invalid alignment. If we
store the worst-case alignment required for a VMA, we can avoid having
to rebind at
As pwrite does not use the fence for its GTT access, and may even go
through a secondary interface avoiding the main VMA, we cannot treat the
write as automatically invalidated by the hardware and so we require
ORIGIN_CPU frontbuffer invalidate/flushes.
Signed-off-by: Chris Wilson
When doing relocations, we have to obtain a mapping to the page
containing the target address. This is either a kmap or iomap depending
on GPU and its cache coherency. Neighbouring relocation entries are
typically within the same page and so we can cache our kmapping between
them and avoid those
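The per-page relocation cache described above can be sketched like this; the structure, PAGE_SIZE handling, and map_page() stand-in (for kmap_atomic/io_mapping) are illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE (1u << PAGE_SHIFT)

struct reloc_cache {
	void *vaddr; /* mapping of the currently cached page */
	long page;   /* which page is mapped, -1 if none */
	int maps;    /* number of map operations, for illustration */
};

/* Stand-in for kmap_atomic()/io_mapping over a flat buffer. */
static void *map_page(uint8_t *base, long page, struct reloc_cache *c)
{
	c->maps++;
	return base + ((size_t)page << PAGE_SHIFT);
}

/*
 * Return the CPU address for a relocation target, remapping only when
 * the relocation lands in a different page than the previous one.
 */
static void *reloc_vaddr(uint8_t *obj, uint64_t offset, struct reloc_cache *c)
{
	long page = (long)(offset >> PAGE_SHIFT);

	if (c->page != page) {
		c->vaddr = map_page(obj, page, c);
		c->page = page;
	}
	return (uint8_t *)c->vaddr + (offset & (PAGE_SIZE - 1));
}
```

Since neighbouring relocation entries usually target the same page, a run of relocations costs one mapping instead of one per entry.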
If we have stolen available, make use of it for ringbuffer allocation.
Previously this was restricted to !llc platforms, as writing to stolen
requires a GGTT mapping - but now that we have partial mappable support,
the mappable aperture isn't quite so precious so we can use it more
freely and
If we cannot pin the entire object into the mappable region of the GTT,
try to pin a single page instead. This is much more likely to succeed,
and prevents us falling back to the clflush slow path.
Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
There is an improbable, but not impossible, case that if we leave the
pages unpin as we operate on the object, then somebody via the shrinker
may steal the lock (which lock? right now, it is struct_mutex, THE lock)
and change the cache domains after we have already inspected them.
(Whilst here,
Similarly to invalidating beforehand, if the object is mmapped via
I915_MMAP_WC we cannot track writes through the I915_GEM_DOMAIN_GTT. At
the conclusion of the write, in i915_gem_object_flush_gtt_writes(), we also
need to treat the origin carefully in case it may have been untracked.
See also commit
With the introduction of the reloc page cache, we are just one step away
from refactoring the relocation write functions into one. Not only does
it tidy the code (slightly), but it greatly simplifies the control logic
much to gcc's satisfaction.
v2: Add selftests to document the relationship
Now that we have WC vmapping available, we can bind our rings anywhere
in the GGTT and do not need to restrict them to the mappable region.
Except for stolen objects, for which direct access is verboten and we
must use the mappable aperture.
Signed-off-by: Chris Wilson
Since we know the write domain, we can drop the local variable and make
the code look a tiny bit simpler.
Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
---
drivers/gpu/drm/i915/i915_gem.c | 15 ---
1 file
If we quickly switch from writing through the GTT to a read of the
physical page directly with the CPU (e.g. performing relocations through
the GTT and then running the command parser), we can observe that the
writes are not visible to the CPU. It is not a coherency problem, as
extensive
By moving map-and-fenceable tracking from the object to the VMA, we gain
fine-grained tracking and the ability to track individual fences on the VMA
(subsequent patch).
Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
---
In order to handle tiled partial GTT mmappings, we need to associate the
fence with an individual vma.
v2: A couple of silly drops replaced spotted by Joonas
Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
---
== Series Details ==
Series: drm/i915: Mark up the GTT flush following WC writes as ORIGIN_CPU
URL : https://patchwork.freedesktop.org/series/11229/
State : failure
== Summary ==
Applying: drm/i915: Mark up the GTT flush following WC writes as ORIGIN_CPU
fatal: sha1 information is lacking or
== Series Details ==
Series: series starting with [1/2] drm/i915: Use ORIGIN_CPU for fb invalidation
from pwrite
URL : https://patchwork.freedesktop.org/series/11227/
State : failure
== Summary ==
Series 11227v1 Series without cover letter
On 8/17/2016 9:07 PM, Goel, Akash wrote:
On 8/17/2016 6:41 PM, Imre Deak wrote:
On ke, 2016-08-17 at 18:15 +0530, Goel, Akash wrote:
On 8/17/2016 5:11 PM, Chris Wilson wrote:
On Wed, Aug 17, 2016 at 12:27:30PM +0100, Tvrtko Ursulin wrote:
+int intel_guc_suspend(struct drm_device
This just contains the base property classes and all the code to
handle blobs. I think for any kind of standardized/shared properties
it's better to have separate files - this is fairly big already as-is.
v2: resurrect misplaced hunk (Daniel Stone)
Cc: Daniel Stone
I figured an overview section here is overkill, and better
to just document the 2 structures themselves well enough.
Signed-off-by: Daniel Vetter
---
drivers/gpu/drm/drm_mode_object.c | 9 +++
include/drm/drm_mode_object.h | 50
It's part of the drm fourcc handling code, mapping the old depth/bpp
values to new fourcc codes.
Cc: Laurent Pinchart
Signed-off-by: Daniel Vetter
---
drivers/gpu/drm/drm_crtc.c | 43 ---
Just for the struct drm_mode_object base class. The header file was
already partially extracted to help untangle the include loops.
v2:
- Also move the generic get/set property ioctls. At first this seemed
like a bad idea since it requires making drm_mode_crtc_set_obj_prop
non-static. But
- remove kerneldoc for drm-internal functions
- drm_property_replace_global_blob isn't actually atomic, and doesn't
need to be. Update docs to match
- document all the types and try to link things a bit better
- nits all over
Signed-off-by: Daniel Vetter
---
It's only used in drm_mode_object_get_properties, and we can compute
it there directly with a bit of code shuffling.
Signed-off-by: Daniel Vetter
---
drivers/gpu/drm/drm_mode_object.c | 31 ---
include/drm/drm_mode_object.h | 2 +-
2
They work exactly the same now, after the refcounting unification a while
ago. The only reason they're distinct is backwards compat with existing
userspace.
Cc: Daniel Stone
Signed-off-by: Daniel Vetter
---
drivers/gpu/drm/drm_property.c | 23
- Move missing bits into struct drm_encoder docs.
- Explain that encoders are 95% internal and only 5% uapi, and that in
general the uapi part is broken.
- Remove verbose comments for functions not exposed to drivers.
Signed-off-by: Daniel Vetter
---
Same treatment as before. Only hiccup is drm_crtc_mask, which
unfortunately can't be resolved until drm_crtc.h is less of a monster.
Untangle the header loop with a forward declaration for that static
inline.
Signed-off-by: Daniel Vetter
---
Em Qua, 2016-08-17 às 19:20 +0100, Chris Wilson escreveu:
> Similarly to invalidating beforehand, if the object is mmapped via
> I915_MMAP_WC we cannot track writes through the I915_GEM_DOMAIN_GTT.
> At
> the conclusion of the write, i915_gem_object_flush_gtt_writes() we
> also
> need to treat the
Em Qua, 2016-08-17 às 20:49 +0100, Chris Wilson escreveu:
> On Wed, Aug 17, 2016 at 04:41:44PM -0300, Paulo Zanoni wrote:
> >
> > From: Chris Wilson
> >
> > intel_fbc_pre_update() depends upon the new state being already
> > pinned
> > in place in the Global GTT
If we're enabling a pipe, we'll need to modify the watermarks on all
active planes. Since those planes won't be added to the state on
their own, we need to add them ourselves.
Signed-off-by: Lyude
Reviewed-by: Matt Roper
Cc: sta...@vger.kernel.org
Patches that actually changed (please re-review):
drm/i915/gen6+: Interpret mailbox error flags
drm/i915/skl: Add support for the SAGV, fix underrun hangs
drm/i915/skl: Update DDB values atomically with wms/plane attrs
Everything else is the same. Updated version of
Now that we can hook into update_crtcs and control the order in which we
update CRTCs at each modeset, we can finish the final step of fixing
Skylake's watermark handling by performing DDB updates at the same time
as plane updates and watermark updates.
The first major change in this patch is
Since we have to write ddb allocations at the same time as we do other
plane updates, we're going to need to be able to control the order in
which we execute modesets on each pipe. The easiest way to do this is to
just factor this section of intel_atomic_commit_tail()
(intel_atomic_commit() for
Since the watermark calculations for Skylake are still broken, we're apt
to hitting underruns very easily under multi-monitor configurations.
While it would be lovely if this was fixed, it's not. Another problem
that's been coming from this however, is the mysterious issue of
underruns causing
Thanks to Ville for suggesting this as a potential solution to pipe
underruns on Skylake.
On Skylake all of the registers for configuring planes, including the
registers for configuring their watermarks, are double buffered. New
values written to them won't take effect until said registers are
In order to add proper support for the SAGV, we need to be able to know
what the cause of a failure to change the SAGV through the pcode mailbox
was. The reasoning for this is that some very early pre-release Skylake
machines don't actually allow you to control the SAGV on them, and
indicate an
From: Matt Roper
When we write watermark values to the hardware, those values are stored
in dev_priv->wm.skl_hw. However with recent watermark changes, the
results structure we're copying from only contains valid watermark and
DDB values for the pipes that are
On Wed, Aug 17, 2016 at 04:41:44PM -0300, Paulo Zanoni wrote:
> From: Chris Wilson
>
> intel_fbc_pre_update() depends upon the new state being already pinned
> in place in the Global GTT (primarily for both fencing which wants both
> an offset and a fence register, if
From: Chris Wilson
intel_fbc_pre_update() depends upon the new state being already pinned
in place in the Global GTT (primarily for both fencing which wants both
an offset and a fence register, if assigned). This requires the call to
intel_fbc_pre_update() be after
On Wed, 2016-08-17 at 19:27 +, Sedat Dilek wrote:
>
> > > The call-trace is reproducible with my setup and seen on every
> > > boot.
Might have been nice to keep the backtrace when adding new people :)
I found it here:
https://lists.freedesktop.org/archives/intel-gfx/2016-July/100695.html
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by: Rodrigo Vivi
Introducing a GEN3_FEATURES macro to simplify the struct definitions by
platforms given that most of the features are common. Inspired by the
GEN7_FEATURES macro done by Ben W. and others.
Use it for i915g, i915gm, i945g, i945gm, g33 and pnv.
CC: Ben Widawsky
Signed-off-by:
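The macro pattern being introduced can be sketched as follows; the struct fields and platform values here are illustrative, not the full i915 device-info definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative device-info struct; the real one lives in i915_drv.h. */
struct intel_device_info {
	int gen;
	bool is_mobile;
	bool has_overlay;
	bool hws_needs_physical;
};

/* Shared Gen3 features, in the spirit of the GEN7_FEATURES macro:
 * common flags live in one place, per-platform differences stay in
 * each initializer. */
#define GEN3_FEATURES \
	.gen = 3, \
	.has_overlay = true, \
	.hws_needs_physical = true

static const struct intel_device_info intel_i915g_info = {
	GEN3_FEATURES,
	.is_mobile = false,
};

static const struct intel_device_info intel_i915gm_info = {
	GEN3_FEATURES,
	.is_mobile = true,
};
```

Adding a new common feature then means touching the macro once instead of every Gen3 platform entry.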
Make .hws_needs_physical the exception by switching the flag on
earlier platforms, since they are fewer to support. Remove the flag on
later hardware since it all uses GTT hws by default.
Switch the logic in the driver as well to reflect this change.
Signed-off-by: Carlos Santa
Introducing a GEN4_FEATURES macro to simplify the struct
definitions by platforms given that most of the features are common.
Inspired by the GEN7_FEATURES macro done by Ben W. and others.
Use it for i965g, i965gm, g45 and gm45.
CC: Ben Widawsky
Signed-off-by: Carlos Santa
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by:
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by:
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by:
Introducing a GEN5_FEATURES macro to simplify the struct
definitions by platforms given that most of the features are common.
Inspired by the GEN7_FEATURES macro done by Ben W. and others.
Use it for ilk.
CC: Ben Widawsky
Signed-off-by: Carlos Santa
[patch series] Moving all GPU features to the platform struct definition
allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct definition
Signed-off-by: Carlos Santa
Reviewed-by:
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by: Rodrigo Vivi
As recommended by Ville Syrjala, remove the .is_mobile field from the
platform struct definition for vlv and hsw+ GPUs, as there's no need to
make the distinction in later hardware anymore. Keep it for older GPUs
as it is still needed for ilk-ivb.
Signed-off-by: Carlos Santa
Introducing a GEN2_FEATURES macro to simplify the struct definitions by
platforms given that most of the features are common. Inspired by the
GEN7_FEATURES macro done by Ben W. and others.
Use it for 830, 845g, i85x, i865g.
CC: Ben Widawsky
Signed-off-by: Carlos Santa
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by:
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
---
No need for HAS_CORE_RING_FREQ as that flag is actually the same as
.has_llc. Feedback from V. Syrjala.
Signed-off-by: Carlos Santa
---
drivers/gpu/drm/i915/i915_debugfs.c | 2 +-
drivers/gpu/drm/i915/i915_drv.h | 4
2 files changed, 1 insertion(+), 5
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platform
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by: Rodrigo Vivi
Remove runtime PM support for SNB as it breaks hotplug support.
Feedback from V. Syrjala.
Signed-off-by: Carlos Santa
---
drivers/gpu/drm/i915/i915_pci.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
Introducing a GEN6_FEATURES macro to simplify the struct definitions by
platforms given that most of the features are common. Inspired by the
GEN7_FEATURES macro done by Ben W. and others.
Use it for snb.
CC: Ben Widawsky
Signed-off-by: Carlos Santa
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by: Rodrigo Vivi
Moving all GPU features to the platform definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by: Rodrigo Vivi
- organize most GPU features so that they are easy to group by platforms.
It seems some of the ground work was already done for Gen7 features.
- make most of these GPU features now a device_info flag, also based on
previous work done by others. The idea here is to have a central place
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by:
Moving all GPU features to the platform struct definition allows for
- standard place when adding new features from new platforms
- possible to see supported features when dumping struct
definitions
Signed-off-by: Carlos Santa
Reviewed-by:
On Fri, Jul 15, 2016 at 10:40 AM, Chris Wilson wrote:
> On Fri, Jul 15, 2016 at 08:00:25AM +0200, Sedat Dilek wrote:
>> Hi,
>>
>> I see the below call-trace with latest d-i-n, guess latest linux-next
>> will cause same issues.
>> ( Beyond this, there exist also a build
Since a DRM function that reads the DP link status is available, let's
use that instead of the i915 clone.
drm_dp_dpcd_read_link_status() returns a negative error code if the number
of bytes read is not DP_LINK_STATUS_SIZE; drm_dp_dpcd_access() does the
length check.
Signed-off-by: Dhinakaran
Please ignore this, I am resubmitting this as an independent patch.
-DK
On Thu, 2016-08-11 at 13:49 -0700, Dhinakaran Pandiyan wrote:
> Since a DRM function that reads link DP link status is available, let's
> use that instead of the i915 clone.
>
> drm_dp_dpcd_read_link_status() returns a
-Original Message-
From: Jani Nikula [mailto:jani.nik...@linux.intel.com]
Sent: Tuesday, August 16, 2016 4:24 AM
To: Srivatsa, Anusha ;
intel-gfx@lists.freedesktop.org
Cc: Pandiyan, Dhinakaran
Subject: Re: [Intel-gfx] [PATCH v2
On 12 August 2016 at 16:07, Chris Wilson wrote:
> If we need to use clflush to prepare our batch for reads from memory, we
> can bypass the cache instead by using non-temporal copies.
>
> Signed-off-by: Chris Wilson
> ---
>
tree: git://anongit.freedesktop.org/drm-intel drm-intel-next-queued
head: 8d970654b767ebe8aeb524d30e27b37c0cb8eaed
commit: 6687c9062c46c83e5a07df65015eb4fc9dc76524 [2/12] drm/i915: Rewrite fb
rotation GTT handling
config: x86_64-randconfig-s3-08172039 (attached as .config)
compiler: gcc-6
On 8/17/2016 6:41 PM, Imre Deak wrote:
On ke, 2016-08-17 at 18:15 +0530, Goel, Akash wrote:
On 8/17/2016 5:11 PM, Chris Wilson wrote:
On Wed, Aug 17, 2016 at 12:27:30PM +0100, Tvrtko Ursulin wrote:
On 17/08/16 11:14, akash.g...@intel.com wrote:
From: Akash Goel
On Wed, Aug 17, 2016 at 04:10:00PM +0100, Tvrtko Ursulin wrote:
>
> On 17/08/16 15:44, Chris Wilson wrote:
> >On Wed, Aug 17, 2016 at 03:36:51PM +0100, Tvrtko Ursulin wrote:
> >>On 17/08/16 11:05, Chris Wilson wrote:
> >>>On Wed, Aug 17, 2016 at 10:57:34AM +0100, Tvrtko Ursulin wrote:
>
>
Chris Wilson writes:
> Rather than walk the full array of engines checking whether each is in
> the mask in turn, we can use the mask to jump to the right engines. This
> should quicker for a sparse array of engines or mask, whilst generating
> smaller code:
>
I just
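The mask-jumping iteration quoted above can be sketched with standard bit tricks (the kernel's for_each_engine_masked() works along these lines; this toy version just collects bit indices):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Visit only the engines named in the mask: peel off the lowest set bit
 * each iteration instead of testing every engine slot in turn.
 */
static int count_selected(uint32_t mask, int *indices, int max)
{
	int n = 0;

	while (mask && n < max) {
		int bit = __builtin_ctz(mask); /* index of lowest set bit */

		indices[n++] = bit;
		mask &= mask - 1; /* clear that bit */
	}
	return n;
}
```

For a sparse mask this does one iteration per selected engine rather than one per possible engine, which is where both the speedup and the smaller code come from.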
On 17/08/16 15:44, Chris Wilson wrote:
On Wed, Aug 17, 2016 at 03:36:51PM +0100, Tvrtko Ursulin wrote:
On 17/08/16 11:05, Chris Wilson wrote:
On Wed, Aug 17, 2016 at 10:57:34AM +0100, Tvrtko Ursulin wrote:
On 17/08/16 10:41, Chris Wilson wrote:
On Wed, Aug 17, 2016 at 10:34:18AM +0100,
On Wed, 2016-08-17 at 12:37 +0300, Joonas Lahtinen wrote:
> On ma, 2016-08-15 at 09:26 -0700, Jesse Barnes wrote:
> >
> > On Mon, 2016-08-15 at 15:34 +0300, Mika Kuoppala wrote:
> > >
> > >
> > > No idea yet why we would need to limit for rcs only.
> > >
> > I went back and forth; I think I
Chris Wilson writes:
> As we know by inspection whether any engine is still busy as we retire
> all the requests, we can pass that information back via return value
> rather than check again afterwards.
>
> v2: A little more polish missed in patch splitting
>
>
On Wed, Aug 17, 2016 at 03:36:51PM +0100, Tvrtko Ursulin wrote:
>
> On 17/08/16 11:05, Chris Wilson wrote:
> >On Wed, Aug 17, 2016 at 10:57:34AM +0100, Tvrtko Ursulin wrote:
> >>
> >>On 17/08/16 10:41, Chris Wilson wrote:
> >>>On Wed, Aug 17, 2016 at 10:34:18AM +0100, Tvrtko Ursulin wrote:
>