@xavpaice Is anyone else with the problem also using isdct (Intel® SSD
Data Center Tool for NVMe)? I have not had the problem since disabling
it from running regularly. I was using it to check the wear level at
regular intervals before.
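(For reference, the periodic check looked roughly like this. The exact isdct invocation and device index are from memory, so treat this as a sketch:)
# Hourly cron entry (sketch): dump the drive's sensor/wear data to a log.
# "-intelssd 0" is a placeholder index; adjust for the drive in question.
0 * * * * /usr/bin/isdct show -sensor -intelssd 0 >> /var/log/ssd-wear.log 2>&1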
And the reason I thought of that was that finally one time I
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: linux-lts-xenial (Ubuntu)
Status: New => Confirmed
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1724173

Title:
  bcache makes the whole io system hang after long run time
We're also seeing this with 4.4.0-111-generic (on Trusty), and a very
similar hardware profile. The boxes in question are running Swift with
a large number (millions) of objects, all approximately 32k in size.
I'm currently running fio in a test environment to try to reproduce
this away from production.
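For the record, the fio job I'm iterating on looks roughly like the below. The directory, file counts, and job count are placeholders standing in for our Swift layout, so treat it as a sketch rather than the exact repro:
# Sketch: approximate the Swift workload with many ~32k files on the
# bcache-backed filesystem. Paths and counts are placeholders.
$ fio --name=swift-like --directory=/srv/node/test --ioengine=libaio \
      --direct=1 --rw=randwrite --bs=32k --nrfiles=10000 --filesize=32k \
      --numjobs=4 --time_based --runtime=3600 --group_reporting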
--
** Tags added: canonical-bootstack
--
This one sets up a slower device, which might be important according to
IRC discussions:
http://paste.ubuntu.com/26063765/
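If anyone wants to emulate a slower device without the paste, a dm-delay mapping over a loop device is one way to do it. This is a sketch with placeholder names and an assumed 50ms delay, not the exact setup from the paste:
# Create a small backing file, expose it as a loop device, then add
# ~50ms of read latency via dm-delay (table: start length delay dev offset ms).
$ dd if=/dev/zero of=/tmp/backing.img bs=1M count=800
$ DEV=$(losetup --find --show /tmp/backing.img)
$ dmsetup create slowdev --table "0 $(blockdev --getsz $DEV) delay $DEV 0 50"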
--
Hi, there was a slight issue in the repro: it relied on sda5. I changed
that to be on shm as well.
2x800M is easy to spare. Hope it triggers with that as well.
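The shm-backed devices are set up along these lines (a sketch; the sizes match the 2x800M above, but the file and loop names are placeholders for what the repro scripts actually use):
# Two 800M files on /dev/shm exposed as loop devices, replacing sda5.
$ dd if=/dev/zero of=/dev/shm/bcache-backing.img bs=1M count=800
$ dd if=/dev/zero of=/dev/shm/bcache-cache.img bs=1M count=800
$ losetup /dev/loop0 /dev/shm/bcache-backing.img
$ losetup /dev/loop1 /dev/shm/bcache-cache.img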
# Get a 3G guest with full console logging
$ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily
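(The guest itself then comes up along these lines; the release filter, memory flag, and guest name here are assumptions, not the exact repro commands:)
# Sketch of the assumed follow-up: create the 3G guest and watch its console.
$ uvt-kvm create --memory 3072 bcache-repro release=xenial arch=amd64
$ uvt-kvm wait bcache-repro
$ virsh console bcache-repro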
Tested again with a better serial console setup; here's the text
version of that screenshot:
[ 744.779536] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: c00d3ccc
[ 744.779536]
[ 744.780013] CPU: 0 PID: 5087 Comm: grep Not tainted 4.4.0-97-generic
The problem still exists on 4.4.0-97-generic. And in my attempt to
reproduce it on a VM, I got a corrupt kernel instead, which sounds
worse (it could create persistent damage, not just a hang).
Attached are the 3 scripts to reproduce the kernel corruption, and an
extra one to attach if the first
** Attachment added: "2017-11-13 bcachecrash kernel panic
Screenshot_20171113_093118.png"
https://bugs.launchpad.net/ubuntu/+source/linux-lts-xenial/+bug/1724173/+attachment/5008127/+files/2017-11-13%20bcachecrash%20kernel%20panic%20Screenshot_20171113_093118.png
--
** Description changed:
I am using Ubuntu 14.04 (trusty) with the 4.4.x xenial kernel (the
trusty kernel makes bcache far easier to crash). I have mdadm RAID1 on
/boot and /, backed by 2 SSDs.
I have XFS on 12 ceph OSD directories (/var/lib/ceph/osd/ceph-*),
which are backed by bcache,
** Attachment added: "the latest crash, with the bcache on NVMe not shared by
mdadm or the OS"
https://bugs.launchpad.net/ubuntu/+source/linux-lts-xenial/+bug/1724173/+attachment/4973409/+files/2017-10-17-ceph4.kmsg.gz
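(For anyone recreating the stack from the description above: the bcache/XFS layering is the standard bcache-tools setup, sketched below with placeholder device names, not the reporter's exact layout:)
# NVMe as cache set, a data disk as backing, XFS on the resulting bcache device.
$ make-bcache -C /dev/nvme0n1
$ make-bcache -B /dev/sdb
$ echo $CSET_UUID > /sys/block/bcache0/bcache/attach   # cset UUID printed by make-bcache -C
$ mkfs.xfs /dev/bcache0
$ mount /dev/bcache0 /var/lib/ceph/osd/ceph-0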
--
** Attachment added: "2017-07-08-ceph5.kmsg.gz"
https://bugs.launchpad.net/ubuntu/+source/linux-lts-xenial/+bug/1724173/+attachment/4973408/+files/2017-07-08-ceph5.kmsg.gz
--