On Wed, 11 Feb 2026 11:08:55 +0100
Philipp Stanner <[email protected]> wrote:

> On Wed, 2026-02-11 at 10:57 +0100, Danilo Krummrich wrote:
> > (Cc: Xe maintainers)
> > 
> > On Tue Feb 10, 2026 at 12:40 PM CET, Alice Ryhl wrote:  
> > > On Tue, Feb 10, 2026 at 11:46:44AM +0100, Christian König wrote:  
> > > > On 2/10/26 11:36, Danilo Krummrich wrote:  
> > > > > On Tue Feb 10, 2026 at 11:15 AM CET, Alice Ryhl wrote:  
> > > > > >   
> 
> […]
> 
> > > > > 
> > > > > Or in other words, there must be no more than wq->max_active - 1
> > > > > works that execute code violating the DMA fence signalling rules.
> > > 
> > > Ouch, is that really the best way to do that? Why not two workqueues?  
> > 
> > Most drivers making use of this re-use the same workqueue for multiple GPU
> > scheduler instances in firmware scheduling mode (i.e. 1:1 relationship
> > between scheduler and entity). This is equivalent to the JobQ use-case.
> > 
> > Note that we will have one JobQ instance per userspace queue, so sharing the
> > workqueue between JobQ instances can make sense.  
> 
> Why, what for?

Because, even if it's not necessarily a 1:N relationship between queues
and threads these days (with the concept of shared worker pools), each
new workqueue usually implies the creation of new threads/resources,
and we usually don't need that level of parallelization (especially if
the communication channel with the FW can't be accessed concurrently).
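
To illustrate what I mean, here's a rough sketch (my_device/my_queue are
made-up names, not taken from any existing driver): one ordered
workqueue allocated per device and shared by all userspace queues, so
creating a new queue doesn't bring extra worker threads, and the
ordering naturally serializes accesses to the FW channel:

	#include <linux/workqueue.h>

	struct my_device {
		/* One submission workqueue per device, shared by all queues. */
		struct workqueue_struct *submit_wq;
	};

	struct my_queue {
		struct my_device *dev;
		struct work_struct submit_work;
	};

	static void my_queue_submit_work(struct work_struct *work)
	{
		struct my_queue *queue =
			container_of(work, struct my_queue, submit_work);

		/* Talk to the FW over the single communication channel. */
		(void)queue;
	}

	static int my_device_wq_init(struct my_device *dev)
	{
		/*
		 * Ordered workqueue: at most one work item runs at a time,
		 * which also matches a FW channel that can't be accessed
		 * concurrently.
		 */
		dev->submit_wq = alloc_ordered_workqueue("my-dev-submit", 0);
		return dev->submit_wq ? 0 : -ENOMEM;
	}

	static void my_queue_init(struct my_queue *queue, struct my_device *dev)
	{
		queue->dev = dev;
		INIT_WORK(&queue->submit_work, my_queue_submit_work);
	}

	static void my_queue_kick(struct my_queue *queue)
	{
		/* All queues funnel into the same, shared workqueue. */
		queue_work(queue->dev->submit_wq, &queue->submit_work);
	}

Whether you want WQ_MEM_RECLAIM or a higher max_active obviously depends
on the driver, this is just to show the sharing aspect.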
