[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2024-04-27 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Wes Morgan  changed:

   What|Removed |Added

 CC||morg...@gmail.com

--- Comment #18 from Wes Morgan  ---
Exact same behavior for me with two WD SN770s running in a ZFS mirror: there was
no apparent trigger, but it happened on every boot. After reading this report and
some discussions in the forums, I immediately replaced them with two Corsair
MP600 PROs and the problem vanished.

The two SN770s are still installed in the machine, but with the pool exported
and idle they show no errors.

A WD SN850X running as a root pool has no issues.



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-12-30 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Mark Johnston  changed:

   What|Removed |Added

 CC||ma...@freebsd.org
 Status|New |Open

--- Comment #17 from Mark Johnston  ---
Just a "me too" on a newly built system with a SN770 drive (actually two but I
haven't tested the other one yet) used by ZFS.  My BIOS doesn't report any
problem with the 3.3V rail.  The PSU is brand new and I'm a bit reluctant to
replace it at this point.

The problem isn't reproducible on demand for me, but it has been happening
overnight, even when I'm not running anything and the system should be idle.



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-07-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Thierry Thomas  changed:

   What|Removed |Added

   See Also||https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270409



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-07-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

--- Comment #16 from Thierry Thomas  ---
(In reply to Marcus Oliveira from comment #15)

See also
https://forums.tomshardware.com/threads/nvme-ssd-sometimes-does-not-boot-because-of-weak-psu.3692473/



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-07-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Marcus Oliveira  changed:

   What|Removed |Added

 CC||marcu...@gmail.com

--- Comment #15 from Marcus Oliveira  ---
Writing for anyone who might come after me... After reading the last post about
the 3.3V rail of the PSU, I changed the PSU and the problem with my NVMe and
FreeBSD disappeared. Funnily enough, even before replacing the PSU the problem
would not occur on Windows 11.

Marcus



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-06-26 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

--- Comment #14 from Timothy Guo  ---
(In reply to crb from comment #11)

I would like to share my follow-up experience with this issue.

In short, the problem magically went away after I wiped the disk and recreated
the pool from backup. The same system (hardware and software) has now been
working without issue for about half a year. Unfortunately, I couldn't identify
a conclusive culprit during the entire procedure.

One thing I would also like to note is the 3.3V rail of the PSU. While I was
still suffering from the issue, I discovered that the 3.3V rail was
under-voltage, probably thanks to the hint in @crb's bug. I first read the
out-of-range voltage value in the BIOS, and then confirmed it by measuring
directly at the PSU pin-out with a voltmeter. So it's true that the issue could
really be power related. Unfortunately I can't tell which is the offender: is
the NVMe drawing too much power due to a firmware bug, or is a failing PSU
leading to NVMe failure?

I contacted my PSU vendor and was told that the wire connector may have aged
and increased its resistance. Maybe my voltage-measuring attempt fixed the
wiring connection, or maybe the wipe and rebuild worked around a potential
firmware bug. The issue went away as suddenly as it came (though I can't
remember re-assembling any hardware around the time it first appeared).

The only part I'm sure of is that the power problem is real and highly
correlated. A stronger PSU might simply have avoided the problem altogether?
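
For what it's worth, on boards with a BMC the 3.3V rail can also be spot-checked
from the running system instead of rebooting into the BIOS. A minimal sketch,
assuming sysutils/ipmitool is installed and the on-board sensors are
trustworthy:

# List the voltage sensors exposed by the BMC, then filter for the 3.3V rail.
ipmitool sdr type Voltage
ipmitool sdr type Voltage | grep -i '3\.3'

The ATX specification allows roughly +/-5% on that rail (about 3.14-3.46 V), so
readings outside that range from either the BIOS or the BMC point at the PSU or
wiring rather than the drive.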



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-06-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Ed Maste  changed:

   What|Removed |Added

   See Also|https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264141 |

--- Comment #13 from Ed Maste  ---
(In reply to crb from comment #11)
Thanks for adding the follow-up; it seems I was probably hasty in adding the
see-also. I'll remove it.



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-06-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

--- Comment #12 from Lucas Holt  ---
It appears this is a problem with the WD SN770 in general.
https://github.com/openzfs/zfs/discussions/14793



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-06-13 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

crb  changed:

   What|Removed |Added

 CC||c...@chrisbowman.com

--- Comment #11 from crb  ---
It looks like Ed posted a comment referring to a similar bug I had submitted.
The TL;DR of that bug is that I had a 3.3V rail that was out of spec (as
reported by the BIOS). After I replaced the power supply, the BIOS reported a
compliant value for the 3.3V rail, and that system has been ROCK solid ever
since with the same SSD. I'm not suggesting that is your issue; I'm just trying
to save people the time of reading my bug report and working out what the
problem was on that one.
Christopher



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-06-13 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Ed Maste  changed:

   What|Removed |Added

   See Also||https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=264141



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2023-06-13 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Lucas Holt  changed:

   What|Removed |Added

 CC||l...@foolishgames.com

--- Comment #10 from Lucas Holt  ---
I've had the same issue on FreeBSD 13.1-RELEASE. The drive worked fine for
about 2 months, then the ZFS pool (one device only) completely flipped out and
the system had to be powered off. After a reboot, zpool status and other
commands would just hang.

WD_BLACK™ SN770 NVMe™ SSD - 2TB
SKU: WDS200T3X0E 

nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme0: GET LOG PAGE (02) sqid:0 cid:0 nsid: cdw10:007f0002 cdw11:
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
Solaris: WARNING: Pool 'vm' has encountered an uncorrectable I/O failure and has been suspended.
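
Side note for anyone who lands here: the command being aborted above is GET LOG
PAGE for log page 0x02, the SMART / Health Information log (most likely the
periodic health poll). Once the machine is back up, a quick sanity check is to
request that log page explicitly with nvmecontrol; a diagnostic sketch, not a
fix:

# Request the SMART / Health Information log (page 0x02), i.e. the same
# request that is being aborted in the messages above.
nvmecontrol logpage -p 2 nvme0

# Controller identify data, including the firmware revision:
nvmecontrol identify nvme0

If the device answers again, a suspended pool can sometimes be resumed with
"zpool clear vm"; otherwise a reboot is usually the only way out.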



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-10-08 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

bibi  changed:

   What|Removed |Added

 CC||gnub...@gmail.com

--- Comment #9 from bibi  ---
Hello,

I have the same error on FreeBSD 13.1:

8<-
Oct  8 15:47:09 donald kernel: nvme0: WRITE sqid:2 cid:0 nsid:1 lba:88256808 len:24
Oct  8 15:47:09 donald kernel: nvme0: resubmitting queued i/o
Oct  8 15:47:09 donald kernel: nvme0: WRITE sqid:2 cid:0 nsid:1 lba:212904296 len:8
Oct  8 15:47:09 donald kernel: nvme0: resubmitting queued i/o
Oct  8 15:47:09 donald kernel: nvme0: WRITE sqid:2 cid:0 nsid:1 lba:213705328 len:8
8<-

Also found in the log:

8<-
Oct  7 23:49:21 donald kernel: nvme0: RECOVERY_START 6604360949574 vs 6603365139954
Oct  7 23:49:21 donald kernel: nvme0: timeout with nothing complete, resetting
Oct  7 23:49:21 donald kernel: nvme0: Resetting controller due to a timeout.
Oct  7 23:49:21 donald kernel: nvme0: RECOVERY_WAITING
Oct  7 23:49:21 donald kernel: nvme0: resetting controller
Oct  7 23:49:21 donald kernel: nvme0: aborting outstanding i/o
8<-



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-10-03 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

--- Comment #8 from Timothy Guo  ---
(In reply to Ian Brennan from comment #6)
Was that a migration from TrueNAS core to Scale? Did you run into any
regression specific to TrueNAS Scale? I heard Scale is less mature than Core.

My software setup used to be stable until I added that NVMe disk as an upgrade.
I would really like to avoid any software migration, since the setup has been
working just fine for 8+ years with quite a few unusual customizations: Xen
virtualization, with LXC containers in dom0, served by TrueNAS Core running as
a Xen guest.

Xen virtualization and PCI passthrough could be a factor in this problem, but
so far I haven't found any evidence of it... My current top suspect is still
the APST feature handling in the FreeBSD driver.



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-10-03 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

--- Comment #7 from Timothy Guo  ---
(In reply to Timothy Guo from comment #5)
I'm back to report that the problem has just come back (two hours ago,
according to the alert mail) for no obvious reason -- I'm not actively using
that server at the moment. The same NVMe controller timeout/reset shows up in
the kernel log and I lost access to the ZFS pool on it.

The disk itself seems to work fine physically, so I'm not sure I can ask for a
refund or warranty service. On the other hand, once the problem shows up it
appears to affect both Linux and FreeBSD running on the same physical box.
Maybe a firmware bug, maybe a driver issue, maybe both...

I used to suspect the problem is APST related, but I have no way to play with
that setting in FreeBSD. There is hardly any mention of the term in the FreeBSD
world, and no userland tool that can manipulate or inspect the APST feature
setting. That's somewhat surprising, since this feature earned a bad reputation
in the Linux world.

Can anybody help me at least run some diagnostics on this problem? Is it
feasible to manually parse the PCI config space to determine the APST status?
I'll need some guidance for this, though...
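
One pointer that may help: APST is not in PCI config space; it is NVMe feature
0x0c (Autonomous Power State Transition) and is read and written with the
Get/Set Features admin commands. FreeBSD has no dedicated knob for it, but
nvmecontrol's admin-passthru subcommand can issue the raw command. A sketch,
assuming a reasonably recent nvmecontrol and a drive that implements this
optional feature:

# Get Features (admin opcode 0x0a) for feature 0x0c (APST).  Bit 0 of the
# returned cdw0 indicates whether APST is enabled; the 256-byte data buffer
# holds the autonomous transition table.
nvmecontrol admin-passthru --opcode 0x0a --cdw10 0x0c --data-len 256 --read nvme0

# The number of supported power states is reported in the controller
# identify data:
nvmecontrol identify nvme0

Disabling APST would be the corresponding Set Features (opcode 0x09) with the
enable bit in cdw11 cleared, but that is untested here and purely a sketch from
the NVMe specification.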



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-09-24 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

--- Comment #6 from Ian Brennan  ---
I had basically the same problem. I gave up and installed TrueNAS Scale;
Debian seemed to solve the problem. SSDs on FreeBSD just seemed way too
unstable.



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-09-24 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

--- Comment #5 from Timothy Guo  ---
(In reply to Timothy Guo from comment #4)

The disk appears to be running just fine in the FreeNAS box once again.
I'll need to monitor it for a while longer, since it took about one day for the
issue to reproduce the second time, even though from then on it repeated soon
after each reboot...

Apologies if this turns out to be noise.



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-09-24 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

--- Comment #4 from Timothy Guo  ---
(In reply to Timothy Guo from comment #3)
My SN550 got into such a bad state that it triggered an AER flood on a Linux
system, and the BIOS was affected to the point that I couldn't even enter the
BIOS setup page.

As a desperate last resort, I moved the disk into another machine (a NUC box).
Magically, the NUC box recognized the broken disk without problem and booted
just fine (actually I lied -- it landed on a prompt saying it's a FreeNAS data
disk, which at least proves the disk can be read just fine).

Now, after moving it back to my home server box, the disk is back to life once
again. I was able to scrub the ZFS pool on this disk and found no data errors
at all.

This time I didn't run into any issue with APST enabled or disabled, which is
really surprising.

Let me switch back to the offending FreeNAS system to see how it behaves this
time. Is this really a physical connection problem? How could it be -- I didn't
use any adapter cable but plugged the drive directly into the M.2 slot on my
motherboard.



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-09-24 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Timothy Guo  changed:

   What|Removed |Added

 CC||firemeteor@users.sourceforge.net

--- Comment #3 from Timothy Guo  ---
I have a WD SN570 1TB NVMe drive which suddenly ran into this controller reset
problem after about 2 months of active usage. Once the problem shows up, it
reproduces on every reboot.

I'm still trying to confirm whether this is an OS driver issue or a disk issue.
The disk appears to behave differently on different OSes. When I switched to
Linux, the same problematic disk appeared to react to different APST
configurations, and I was able to get it to pass some short read-only tests
(dd, a disk-wide find, grep over a kernel tree, etc.) with APST disabled.
Whether or not this observation is real, it encouraged me to switch back to my
FreeBSD box and try a ZFS scrub. Unfortunately the disk failed terribly this
time, and I couldn't even get it back to work in Linux, as the drive appears to
be stuck in some low-power state... I will try to dig deeper to see if there is
anything I can do to get it back.


BTW, the SMART log does not report any errors for this drive...
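
For reference, the usual way to experiment with APST on the Linux side is the
nvme_core latency cap plus nvme-cli; a sketch with a hypothetical /dev/nvme0,
which may or may not match the exact APST config changes tried above:

# Kernel command line / modprobe option that effectively disables APST by
# disallowing any power state with a non-zero exit latency:
#   nvme_core.default_ps_max_latency_us=0
cat /sys/module/nvme_core/parameters/default_ps_max_latency_us

# Inspect the APST feature (0x0c) and the current power state (feature 0x02):
nvme get-feature /dev/nvme0 -f 0x0c -H
nvme get-feature /dev/nvme0 -f 0x02 -H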



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-04-02 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Graham Perrin  changed:

   What|Removed |Added

 CC||grahamper...@gmail.com

--- Comment #2 from Graham Perrin  ---
(In reply to Ian Brennan from comment #0)

Is there a corresponding report in the TrueNAS area?



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-04-01 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Tomasz "CeDeROM" CEDRO  changed:

   What|Removed |Added

 CC||to...@cedro.info

--- Comment #1 from Tomasz "CeDeROM" CEDRO  ---
Hello world :-)

Just saw the previous report and this one. I also switched to an M.2 NVMe
Samsung SSD 980 1TB drive on an ICYBOX IB-PCI224M2-ARGB PCIe 4.0 adapter in my
desktop, but all seems to work fine. Just a reference point, it may help
somehow :-)

nvme0:  mem 0xfe60-0xfe603fff at device 0.0 on pci5
nvme0: Allocated 64MB host memory buffer
nvd0:  NVMe namespace
nvd0: 953869MB (1953525168 512 byte sectors)

FreeBSD hexagon 13.1-STABLE FreeBSD 13.1-STABLE #0
stable/13-n250096-4f69c575996: Fri Mar 25 03:50:58 CET 2022
root@hexagon:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64



[Bug 262969] NVMe - Resetting controller due to a timeout and possible hot unplug

2022-03-31 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969

Bug ID: 262969
   Summary: NVMe - Resetting controller due to a timeout and
possible hot unplug
   Product: Base System
   Version: 13.1-STABLE
  Hardware: amd64
OS: Any
Status: New
  Severity: Affects Some People
  Priority: ---
 Component: kern
  Assignee: b...@freebsd.org
  Reporter: ibren...@netgrade.com

I am seeing very unstable NVMe behavior on TrueNAS 12 and 13.  I'm using
Western Digital Black NVMe M.2 SSDs; the exact models are WD_BLACK SN770 250GB
& 2TB, firmware version 731030WD.

This is on TrueNAS version: TrueNAS-12.0-U8 / FreeBSD: 12.2-RELEASE-p12 (I
later upgraded to 13 Beta, see below)

I set hw.nvme.per_cpu_io_queues=0, and it did not fix the problem; in fact it
seems to have made it much more frequent, although I'm not 100% sure about that
and need to test again.

I also tried hw.nvme.use_nvd=0 to switch from the nvd(4) driver to the nda(4)
(CAM) driver, which doesn't seem to make a difference, although it produced
slightly different results in the log when the issue happened again.  See the
logs below; I would be grateful if somebody could help with this problem.
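
For anyone wanting to repeat the two experiments above: both are loader
tunables, so they go into /boot/loader.conf (or the TrueNAS tunables UI with
type "loader") and take effect after a reboot. A sketch:

# /boot/loader.conf
hw.nvme.per_cpu_io_queues="0"   # one shared I/O queue pair instead of one per CPU
hw.nvme.use_nvd="0"             # expose the disks via nda(4)/CAM instead of nvd(4)

With use_nvd=0 the disks show up as ndaX rather than nvdX, which matches the
nda3 lines in the second log excerpt below.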


Mar 29 21:42:25 truenas nvme5: Resetting controller due to a timeout and possible hot unplug.
Mar 29 21:42:25 truenas nvme5: resetting controller
Mar 29 21:42:25 truenas nvme5: failing outstanding i/o
Mar 29 21:42:25 truenas nvme5: READ sqid:12 cid:120 nsid:1 lba:1497544880 len:16
Mar 29 21:42:25 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:12 cid:120 cdw0:0
Mar 29 21:42:25 truenas nvme5: failing outstanding i/o
Mar 29 21:42:25 truenas nvme5: READ sqid:12 cid:123 nsid:1 lba:198272936 len:16
Mar 29 21:42:25 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:12 cid:123 cdw0:0
Mar 29 21:42:25 truenas nvme5: failing outstanding i/o
Mar 29 21:42:25 truenas nvme5: READ sqid:13 cid:121 nsid:1 lba:431014528 len:24
Mar 29 21:42:25 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:13 cid:121 cdw0:0
Mar 29 21:42:25 truenas nvme5: failing outstanding i/o
Mar 29 21:42:25 truenas nvme5: READ sqid:15 cid:127 nsid:1 lba:864636432 len:8
Mar 29 21:42:25 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:15 cid:127 cdw0:0
Mar 29 21:42:25 truenas nvme5: failing outstanding i/o
Mar 29 21:42:25 truenas nvme5: READ sqid:16 cid:126 nsid:1 lba:2445612184 len:8
Mar 29 21:42:25 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:16 cid:126 cdw0:0
Mar 29 21:42:25 truenas nvme5: failing outstanding i/o
Mar 29 21:42:25 truenas nvme5: READ sqid:16 cid:120 nsid:1 lba:430503600 len:8
Mar 29 21:42:25 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:16 cid:120 cdw0:0
Mar 29 21:42:25 truenas nvme5: failing outstanding i/o
Mar 29 21:42:25 truenas nvme5: READ sqid:18 cid:123 nsid:1 lba:1499051024 len:8
Mar 29 21:42:26 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:18 cid:123 cdw0:0
Mar 29 21:42:26 truenas nvme5: failing outstanding i/o
Mar 29 21:42:26 truenas nvme5: WRITE sqid:18 cid:124 nsid:1 lba:1990077368 len:8
Mar 29 21:42:26 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:18 cid:124 cdw0:0
Mar 29 21:42:26 truenas nvme5: failing outstanding i/o
Mar 29 21:42:26 truenas nvme5: READ sqid:19 cid:122 nsid:1 lba:1237765696 len:8
Mar 29 21:42:26 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:19 cid:122 cdw0:0
Mar 29 21:42:26 truenas nvme5: failing outstanding i/o
Mar 29 21:42:26 truenas nvme5: READ sqid:19 cid:125 nsid:1 lba:180758264 len:16
Mar 29 21:42:26 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:19 cid:125 cdw0:0
Mar 29 21:42:26 truenas nvme5: failing outstanding i/o
Mar 29 21:42:26 truenas nvme5: READ sqid:20 cid:121 nsid:1 lba:2445612192 len:8
Mar 29 21:42:26 truenas nvme5: ABORTED - BY REQUEST (00/07) sqid:20 cid:121 cdw0:0
Mar 29 21:42:26 truenas nvd5: detached





nvme3: Resetting controller due to a timeout and possible hot unplug.
nvme3: resetting controller
nvme3: failing outstanding i/o
nvme3: READ sqid:7 cid:127 nsid:1 lba:419546528 len:8
nvme3: ABORTED - BY REQUEST (00/07) sqid:7 cid:127 cdw0:0
nvme3: (nda3:nvme3:0:0:1): READ. NCB: opc=2 fuse=0 nsid=1 prp1=0 prp2=0 cdw=1901c5a0 0 7 0 0 0
failing outstanding i/o
(nda3:nvme3:0:0:1): CAM status: CCB request completed with an error
(nda3:nvme3:0:0:1): Error 5, Retries exhausted
nvme3: READ sqid:11 cid:127 nsid:1 lba:782841288 len:8
nvme3: ABORTED - BY REQUEST (00/07) sqid:11 cid:127 cdw0:0
nvme3: (nda3:nvme3:0:0:1): READ. NCB: opc=2 fuse=0 nsid=1 prp1=0 prp2=0 cdw=2ea935c8 0 7 0 0 0
failing outstanding i/o
nvme3: READ sqid:11 cid:123 nsid:1 lba:704576056 len:8
nvme3: ABORTED - BY REQUEST (00/07) sqid:11 cid:123 cdw0:0
nvme3: failing outstanding i/o
nvme3: WRITE sqid:12 cid:127 nsid:1 lba:1016402352 len:8
nvme3: ABORTED - BY REQUEST (00/07) sqid:12 cid:127 cdw0:0
nvme3: failing outstanding i/o
nvme3: READ sqid:12 cid:125 nsid:1 lba:1824854760 len:8
nvme3: ABORTED - BY REQUEST (00/07) sqid:12 cid:125 cdw0:0
nvme3: failing outstanding i/o
nvme3: