On Fri, 2015-10-09 at 10:47 +0200, Daniel Vetter wrote:
> On Fri, Oct 09, 2015 at 08:56:24AM +0100, David Woodhouse wrote:
> > On Fri, 2015-10-09 at 09:28 +0200, Daniel Vetter wrote:
> > > 
> > > Hm, if this still works the same way as on older platforms, then page
> > > faults just read all-zeroes and writes from the gpu go nowhere. That
> > > would also explain the ever-increasing CS execution pointer, since it's
> > > busy churning through 48 bits' worth of address space filled with
> > > MI_NOOP. I'd have hoped our hw would do better than that with svm ...
> > 
> > I'm looking at simple cases like Jesse's 'gem_svm_fault' test. If the
> > access to process address space (a single dword write) does nothing,
> > I don't see why it would then churn through MI_NOOPs; why wouldn't the
> > batch simply complete?
> 
> Yeah, that test case doesn't fit; the one I had in mind is where the
> batch itself faults and the CS just reads MI_NOOP forever. No idea why
> the gpu keeps walking through the address space here. Puzzling.

Does it just keep walking through the address space?

When I hacked my page request handler to *not* service the fault and
just report failure, the batch did seem to complete as normal, just
without doing the write, as you described.

-- 
David Woodhouse                            Open Source Technology Centre
david.woodho...@intel.com                              Intel Corporation

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx