Artful SRU request: https://lists.ubuntu.com/archives/kernel-team/2018-May/092160.html
Xenial SRU request: https://lists.ubuntu.com/archives/kernel-team/2018-May/092161.html

** Description changed:

== SRU Justification ==
IBM reported this bug as a regression introduced by mainline commit
94232a4332de. IBM has requested this SAUCE backport to resolve the
regression in Artful and Xenial.

With Bionic and v4.15 the rwlock code has been rewritten; see upstream git
commit eb3b7b848fb3 ("s390/rwlock: introduce rwlock wait queueing").

Because the upstream code has been rewritten, there is no upstream git
commit id that contains the attached fix.

== Fix ==
UBUNTU: SAUCE: (no-up) s390: fix rwlock implementation

== Regression Potential ==
Low. The backport was written and tested by IBM and is specific to s390.

== Test Case ==
A test kernel was built with this patch and tested by the original bug
reporter, who confirmed that it resolves the bug.

Description:  kernel: fix rwlock implementation
Symptom:      Kernel hangs due to a deadlock on an rwlock.
Problem:      Upstream commit 94232a4332de ("s390/rwlock: improve writer
              fairness") was supposed to implement rwlock writer fairness.
              If a writer tries to take an rwlock, it unconditionally sets
              the writer bit in the lock word and waits until all readers
              have released the lock. This can lead to a deadlock, because
              rwlocks can be taken recursively by readers. If, for example,
              CPU 0 holds the lock as a reader and CPU 1 wants to
              write-lock it, CPU 1 sets the writer bit and then busy-waits
              for CPU 0 to release the lock. If CPU 0 now tries to
              read-lock the lock again (recursively), it also busy-waits
              until CPU 1 removes the writer bit, which will never happen,
              since CPU 1 is waiting for the first reader on CPU 0 to
              release the lock. A minimal user-space sketch of this
              scenario is included at the end of this report.
Solution:     Revert the rwlock writer fairness semantics again.

https://bugs.launchpad.net/bugs/1761674

Title:
  [Ubuntu 16.04] kernel: fix rwlock implementation
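
The deadlock described under "Problem" above can be modelled outside the
kernel. The sketch below is an illustrative assumption, not the actual
arch/s390 rwlock code: the lock-word layout, the function names
(write_lock_fair and friends) and the two-thread scenario are simplified,
and it uses C11 atomics plus pthreads instead of the kernel's primitives.
With the reverted, reader-preferring write_lock() the program runs to
completion; setting use_fair = 1 switches the writer thread to
write_lock_fair(), which can reproduce the hang (depending on thread
timing).

/*
 * Sketch only, not kernel code.  Build with: gcc -pthread rwlock_sketch.c
 * Lock word model: bit 31 = writer, bits 0..30 = reader count.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define WRITER_BIT 0x80000000u

static atomic_uint lock;        /* zero-initialised: unlocked          */
static int use_fair;            /* set to 1 to reproduce the hang      */

/* "Fair" writer (the reverted scheme): set the writer bit first, then
 * spin until all readers are gone.  A reader that re-enters recursively
 * while the bit is set spins forever -> deadlock.                      */
static void write_lock_fair(void)
{
        atomic_fetch_or(&lock, WRITER_BIT);
        while ((atomic_load(&lock) & ~WRITER_BIT) != 0)
                ;               /* wait for readers to drain            */
}

/* Reverted semantics: a writer only succeeds while the lock word is 0,
 * so it never blocks an already-running reader from re-entering.       */
static void write_lock(void)
{
        unsigned int expected;

        do {
                expected = 0;
        } while (!atomic_compare_exchange_weak(&lock, &expected, WRITER_BIT));
}

static void write_unlock(void)
{
        atomic_store(&lock, 0);
}

/* Readers wait only while a writer owns the lock, then bump the count. */
static void read_lock(void)
{
        unsigned int old;

        do {
                while ((old = atomic_load(&lock)) & WRITER_BIT)
                        ;       /* a writer owns the lock               */
        } while (!atomic_compare_exchange_weak(&lock, &old, old + 1));
}

static void read_unlock(void)
{
        atomic_fetch_sub(&lock, 1);
}

static void *writer(void *arg)
{
        (void)arg;
        if (use_fair)
                write_lock_fair();      /* "CPU 1" in the bug report    */
        else
                write_lock();
        write_unlock();
        return NULL;
}

int main(void)
{
        pthread_t w;

        read_lock();                    /* "CPU 0": first read lock     */
        pthread_create(&w, NULL, writer, NULL);
        read_lock();                    /* "CPU 0": recursive read lock;
                                         * with use_fair = 1 this can
                                         * spin forever once the writer
                                         * bit has been set             */
        read_unlock();
        read_unlock();
        pthread_join(w, NULL);
        puts("no deadlock with the reverted reader-preferring semantics");
        return 0;
}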
