On Thu, Jan 27, 2022 at 12:51:19PM +0300, Alex Beakes wrote:
> On Tue, Jan 25, 2022 at 2:25:23, Jonathan Gray wrote:
> > On Mon, Jan 24, 2022 at 05:57:01PM -0700, Thomas Frohwein wrote:
> > > On Tue, 25 Jan 2022 10:59:26 +1100
> > > Jonathan Gray <[email protected]> wrote:
> > > 
> > > > > if you revert the previous and try this does it still boot?
> > > >
> > > > this would be more interesting to try
> > > >
> > > > corresponds to 'drm/i915: Add object locking to vm_fault_cpu'
> > > > 9fa1f4785f2a54286ccb8a850cda5661f0a3aaf9
> > > >
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?&id=9fa1f4785f2a54286ccb8a850cda5661f0a3aaf9
> > > >
> > > 
> > > Unfortunately both diffs (this one and the one from earlier today for
> > > drm_linux.c) with the pool_debug patch reversed again lead to the same
> > > pmap crash on boot with the MP kernel. Of note, someone posted the
> > > same panic on reddit with screenshots. The mentioned T14s gen 2
> > > appears to be another Tiger Lake model. There is more detail that
> > > they queried from ddb in the link than what I reported:
> > > 
> > > https://old.reddit.com/r/openbsd/comments/sblbr2/kernel_panic_when_installing_70current_the/
> > > 
> > 
> > Another missed part in 988d4ff6e3c2220d13d8dde22a98945b64fd7977
> > drm/i915: Fix ww locking in shmem_create_from_object
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?&id=988d4ff6e3c2220d13d8dde22a98945b64fd7977
> > 
> > 
> > Index: sys/dev/pci/drm/i915/gem/i915_gem_mman.c
> > ===================================================================
> > RCS file: /cvs/src/sys/dev/pci/drm/i915/gem/i915_gem_mman.c,v
> > retrieving revision 1.7
> > diff -u -p -r1.7 i915_gem_mman.c
> > --- sys/dev/pci/drm/i915/gem/i915_gem_mman.c        23 Jan 2022 22:53:03 -0000       1.7
> > +++ sys/dev/pci/drm/i915/gem/i915_gem_mman.c        24 Jan 2022 23:46:19 -0000
> > @@ -563,6 +563,9 @@ vm_fault_cpu(struct i915_mmap_offset *mm
> >             return VM_PAGER_BAD;
> >     }
> > 
> > +   if (i915_gem_object_lock_interruptible(obj, NULL))
> > +           return VM_PAGER_ERROR;
> > +
> >     err = i915_gem_object_pin_pages(obj);
> >     if (err)
> >             goto out;
> > @@ -602,6 +605,7 @@ vm_fault_cpu(struct i915_mmap_offset *mm
> >     i915_gem_object_unpin_pages(obj);
> > 
> > out:
> > +   i915_gem_object_unlock(obj);
> >     uvmfault_unlockall(ufi, NULL, &obj->base.uobj);
> >     return i915_error_to_vmf_fault(err);
> > }
> > Index: sys/dev/pci/drm/i915/gt/shmem_utils.c
> > ===================================================================
> > RCS file: /cvs/src/sys/dev/pci/drm/i915/gt/shmem_utils.c,v
> > retrieving revision 1.2
> > diff -u -p -r1.2 shmem_utils.c
> > --- sys/dev/pci/drm/i915/gt/shmem_utils.c   14 Jan 2022 06:53:13 -0000      1.2
> > +++ sys/dev/pci/drm/i915/gt/shmem_utils.c   25 Jan 2022 02:15:06 -0000
> > @@ -163,12 +163,13 @@ uao_create_from_object(struct drm_i915_g
> >     struct uvm_object *uao;
> >     void *ptr;
> > 
> > -   if (obj->ops == &i915_gem_shmem_ops) {
> > +   if (i915_gem_object_is_shmem(obj)) {
> >             uao_reference(obj->base.uao);
> >             return obj->base.uao;
> >     }
> > 
> > -   ptr = i915_gem_object_pin_map(obj, I915_MAP_WB);
> > +   ptr = i915_gem_object_pin_map_unlocked(obj, i915_gem_object_is_lmem(obj) ?
> > +                                           I915_MAP_WC : I915_MAP_WB);
> >     if (IS_ERR(ptr))
> >             return ERR_CAST(ptr);
> 
> Is this committed in the 25-Jan-2022 snapshot?
> Because that's the one I've tested.
> I didn't have the time to go through src and build it.

thfr said those changes didn't help; I imagine it would be the same for
you.  Both were committed.

> 
> Wanted to clarify my reddit post:
> There is a problem when the install is encrypted (full disk encryption,
> with bioctl).
> Unencrypted version works fine (got to the login).
> 
> When finding this bug, I tried:
> - sysupgrade -s
> - fresh install w install70.img
> 
> Reproduction:
> 1. encrypt the disk with bioctl -c C -l $yourdisk softraid0
> 2. installation

It is interesting you see a difference with softraid.
I've not been able to reproduce this on a x250 with broadwell/gen8
graphics with root on encrypted softraid.

There were some commits made a few hours ago you should also be
testing with.  So perhaps hold off and try tomorrow.

diff below as changes don't seem to be flowing to anoncvs mirrors at the
moment

Index: i915_gem.c
===================================================================
RCS file: /cvs/src/sys/dev/pci/drm/i915/i915_gem.c,v
retrieving revision 1.131
diff -u -p -r1.131 i915_gem.c
--- i915_gem.c  14 Jan 2022 06:53:11 -0000      1.131
+++ i915_gem.c  27 Jan 2022 03:54:17 -0000
@@ -1208,7 +1208,7 @@ void i915_gem_driver_release(struct drm_
 
 static void i915_gem_init__mm(struct drm_i915_private *i915)
 {
-       mtx_init(&i915->mm.obj_lock, IPL_NONE);
+       mtx_init(&i915->mm.obj_lock, IPL_TTY);
 
        init_llist_head(&i915->mm.free_list);
 
Index: i915_scheduler.c
===================================================================
RCS file: /cvs/src/sys/dev/pci/drm/i915/i915_scheduler.c,v
retrieving revision 1.4
diff -u -p -r1.4 i915_scheduler.c
--- i915_scheduler.c    14 Jan 2022 06:53:11 -0000      1.4
+++ i915_scheduler.c    27 Jan 2022 03:56:40 -0000
@@ -483,7 +483,7 @@ i915_sched_engine_create(unsigned int su
        INIT_LIST_HEAD(&sched_engine->requests);
        INIT_LIST_HEAD(&sched_engine->hold);
 
-       mtx_init(&sched_engine->lock, IPL_NONE);
+       mtx_init(&sched_engine->lock, IPL_TTY);
        lockdep_set_subclass(&sched_engine->lock, subclass);
 
        /*
