Hello,
it looks like this is fixed in the current kernel version in Bullseye.
Thanks for your help.

Package: src:linux
Version: 5.10.28-1
Severity: normal

Hello,
after upgrading some of our servers to Debian testing I noticed that
their reported disk utilization increased significantly. Since there was
no other indication of them being under load, and since I had prior
knowledge of

Package: src:linux
Version: 5.10.28-1
Severity: normal

Hello,
after upgrading some of our QEMU/KVM host servers to Debian testing I
noticed that their CPU load shifted from "user" to "system" (observed
via Prometheus and Grafana). After some quick digging I found that the
new kernel is no longer

Package: linux-source-4.19
Version: 4.19.160-2
Followup-For: Bug #927184

We got hit by this bug after updating to the most recent kernel version.
In our case it affected all disk drives, regardless of their type
(rotating or SSD). After some research, we found that the issue was
triggered by

Hello,
sorry for the delayed reply; apparently the only e-mail that was sent to
this address was the one about the bug being closed.
I tried the same thing as when I initially reported the bug: syncing
multiple RAID1 arrays while doing a pvmove between them. No errors were
logged, everything
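For reference, the reproduction described above (resyncing RAID1 arrays while running pvmove between them) could be sketched roughly as follows. This is only an illustrative sketch: the md device names, and the assumption that both arrays are LVM physical volumes in the same volume group, are placeholders and are not taken from the original report.

```shell
# Sketch of the reported workload (run as root; device names are
# hypothetical). Both /dev/md0 and /dev/md1 are assumed to be RAID1
# arrays serving as PVs in one volume group.

# Kick off a resync/check on both arrays via the md sysfs interface.
echo check > /sys/block/md0/md/sync_action
echo check > /sys/block/md1/md/sync_action

# While the arrays are resyncing, move allocated extents from one
# physical volume to the other.
pvmove /dev/md0 /dev/md1

# Watch resync progress and the kernel log for errors.
cat /proc/mdstat
dmesg --follow
```

The point of combining the two operations is to generate sustained, overlapping I/O on both arrays at once, which is the load pattern the reporter used to trigger the original problem.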

Package: linux-image-4.19.0-0.bpo.2-amd64
Version: 4.19.16-1~bpo9+1

When testing 4.19 from backports, we ran into an issue which manifests
under heavy I/O load: the driver gets stuck in an endless reset loop,
badly affecting I/O performance.
The affected server uses a Supermicro H11DSi-NT