On Wed, Oct 05, 2011 at 05:59:31PM -0700, Eric Anholt wrote:
> On Wed, 5 Oct 2011 15:57:13 -0700, Ben Widawsky <[email protected]> wrote:
> > I think we also want a TLB invalidate here, bit 18.  This requires another
> > workaround before issuing this flush: we need 2 Store Data commands (such as
> > MI_STORE_DATA_IMM or MI_STORE_DATA_INDEX) before sending PIPE_CONTROL w/
> > stall (20) and TLB inv bit (18) set.
> 
> From the docs for GFX_MODE:
> 
>     "This field controls the invalidation of the TLB cache inside the
>      hardware. When enabled this bit limits the invalidation of the TLB
>      only to batch buffer boundaries or to pipe_control commands which
>      have the TLB invalidation bit set. If disabled, the TLB caches are
>      flushed for every full flush of the pipeline"
> 
> We're already getting TLB invalidate at batchbuffer boundaries
> (actually, even more: every pipeline stall, since that bit is 0 on my
> hardware).  What would we need this new flush for?

Does this mean the TLBs are invalidated only after each batch buffer
(MI_BATCH_BUFFER_END), or also before actually jumping to the batch at
MI_BATCH_BUFFER_START? If it's only at the end, what happens when we map
or unmap objects in between batches? Won't the TLBs need to be flushed
in between?

Also, wouldn't it be nice for the kernel to be able to flush TLBs without
having to submit a batch? (No specific use case in mind for this.)

Ben


_______________________________________________
Intel-gfx mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
