[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2021-06-23 Thread Jennifer Duong
** Tags removed: verification-needed-focal
** Tags added: verification-done-focal


[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2021-06-23 Thread Jennifer Duong
After upgrading to kernel-5.4.0-74-generic, the Qlogic direct-connect
host is able to discover both SCSI-FC and NVMe/FC devices. See attached
logs.

** Attachment added: "syslog"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+attachment/5506491/+files/syslog


[Bug 1929599] Re: nvmf-autoconnect: add udev rules to set iopolicy for certain NetApp devices

2021-05-25 Thread Jennifer Duong
** Package changed: ubuntu => nvme-cli (Ubuntu)


[Bug 1929599] Re: nvmf-autoconnect: add udev rules to set iopolicy for certain NetApp devices

2021-05-25 Thread Jennifer Duong
** Attachment added: "apport.nvme-cli.w9v9p8jp.apport"
   
https://bugs.launchpad.net/ubuntu/+bug/1929599/+attachment/5500304/+files/apport.nvme-cli.w9v9p8jp.apport


[Bug 1929599] [NEW] nvmf-autoconnect: add udev rules to set iopolicy for certain NetApp devices

2021-05-25 Thread Jennifer Duong
Public bug reported:

Can Ubuntu 20.04 pull in the following git commit?

https://github.com/linux-nvme/nvme-cli/commit/6eafcf96f315d6ae7be5fa8332131c4cc487d5df

NetApp devices should use the round-robin I/O policy rather than the
default, numa, whenever native NVMe multipathing is enabled. The commit
above adds udev rules that set this policy automatically.
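
For anyone who wants to check or apply the policy by hand before the
packaged rule lands, it is visible in sysfs. A minimal sketch, assuming
native NVMe multipathing is enabled and using an illustrative subsystem
instance name (nvme-subsys0):

# confirm native NVMe multipathing is on (Y/N)
cat /sys/module/nvme_core/parameters/multipath

# show the I/O policy currently applied to one subsystem
cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

# switch that subsystem from the default (numa) to round-robin by hand;
# the udev rules in the commit above do this automatically for matching
# NetApp models
echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy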

** Affects: ubuntu
 Importance: Undecided
 Status: New


[Bug 1920991] Re: Ubuntu 20.04 - NVMe/IB I/O error detected while manually resetting controller

2021-05-18 Thread Jennifer Duong
I spoke with a few of our controller firmware developers, and it sounds
like the controllers are returning the proper status whenever a single
controller is reset. However, it appears that Ubuntu may not have
retried I/O on the alternate path.
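
For reference when reproducing this, the per-path state the host sees
during the reset can be captured. A sketch only, assuming native NVMe
multipathing and illustrative device names (subsystem nvme0,
namespace 1):

# list every controller path in each subsystem, with transport and
# address
nvme list-subsys

# with native multipathing each namespace has a head device (nvme0n1)
# plus hidden per-path devices (nvme0c0n1, nvme0c1n1, ...); their ANA
# state shows whether an alternate path was available to retry on
grep . /sys/block/nvme0c*n1/ana_state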


[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2021-04-09 Thread Jennifer Duong
I can now discover NVMe/FC devices on my Qlogic direct connect system
after upgrading to kernel-5.4.0-70-generic. nvme-cli-1.9-1 is installed.

root@ICTM1608S01H4:~# nvme list
Node           SN            Model            Namespace  Usage                    Format          FW Rev
-------------  ------------  ---------------  ---------  -----------------------  --------------  --------
/dev/nvme0n1   721838500080  NetApp E-Series  100          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n10  721838500080  NetApp E-Series  109          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n11  721838500080  NetApp E-Series  110          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n12  721838500080  NetApp E-Series  111          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n13  721838500080  NetApp E-Series  112          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n14  721838500080  NetApp E-Series  113          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n15  721838500080  NetApp E-Series  114          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n16  721838500080  NetApp E-Series  115          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n17  721838500080  NetApp E-Series  116          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n18  721838500080  NetApp E-Series  117          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n19  721838500080  NetApp E-Series  118          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n2   721838500080  NetApp E-Series  101          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n20  721838500080  NetApp E-Series  119          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n21  721838500080  NetApp E-Series  120          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n22  721838500080  NetApp E-Series  121          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n23  721838500080  NetApp E-Series  122          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n24  721838500080  NetApp E-Series  123          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n25  721838500080  NetApp E-Series  124          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n26  721838500080  NetApp E-Series  125          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n27  721838500080  NetApp E-Series  126          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n28  721838500080  NetApp E-Series  127          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n29  721838500080  NetApp E-Series  128          2.20  TB /   2.20  TB  512   B +  0 B  88714915
/dev/nvme0n3   721838500080  NetApp E-Series  102          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n30  721838500080  NetApp E-Series  129         10.74  GB /  10.74  GB  512   B +  0 B  88714915
/dev/nvme0n31  721838500080  NetApp E-Series  130         10.74  GB /  10.74  GB  512   B +  0 B  88714915
/dev/nvme0n32  721838500080  NetApp E-Series  131         10.74  GB /  10.74  GB  512   B +  0 B  88714915
/dev/nvme0n4   721838500080  NetApp E-Series  103          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n5   721838500080  NetApp E-Series  104          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n6   721838500080  NetApp E-Series  105          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n7   721838500080  NetApp E-Series  106          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n8   721838500080  NetApp E-Series  107          4.29  GB /   4.29  GB  512   B +  0 B  88714915
/dev/nvme0n9   721838500080  NetApp E-Series  108          4.29  GB /   4.29  GB  512   B +  0 B  88714915

[Bug 1874270] Re: NVMe/FC connections fail to reestablish after controller is reset

2021-04-08 Thread Jennifer Duong
Dan, I've attached /var/log/syslog and journalctl logs of a recreate
after installing nvme-cli_1.9-1ubuntu0.1+bug1874270v20210408b2_amd64 and
rebooting the host.

Apr  8 14:44:00 ICTM1608S01H1 root: JD: Resetting controller B
Apr  8 14:44:09 ICTM1608S01H1 kernel: [  196.190003] lpfc :af:00.1: 
5:(0):6172 NVME rescanned DID x3d3800 port_state x2
Apr  8 14:44:09 ICTM1608S01H1 kernel: [  196.190082] nvme nvme2: NVME-FC{2}: 
controller connectivity lost. Awaiting Reconnect
Apr  8 14:44:09 ICTM1608S01H1 kernel: [  196.190176] lpfc :18:00.1: 
1:(0):6172 NVME rescanned DID x3d3800 port_state x2
Apr  8 14:44:09 ICTM1608S01H1 kernel: [  196.190268] nvme nvme6: NVME-FC{6}: 
controller connectivity lost. Awaiting Reconnect
Apr  8 14:44:09 ICTM1608S01H1 kernel: [  196.211805] nvme nvme2: NVME-FC{2}: io 
failed due to lldd error 6
Apr  8 14:44:09 ICTM1608S01H1 systemd[1]: Started NVMf auto-connect scan upon 
nvme discovery controller Events.
Apr  8 14:44:09 ICTM1608S01H1 systemd[1]: 
nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service:
 Succeeded.
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2827]: 
filp(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2828]: 
dentry(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2827]: 
pid(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2828]: 
inode_cache(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2829]: 
kmalloc-rcl-512(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2829]: 
PING(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2828]: 
skbuff_head_cache(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2827]: 
kmalloc-1k(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2830]: 
sock_inode_cache(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 ICTM1608S01H1 systemd-udevd[2829]: 
kmalloc-64(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x2090fadcc5ce:pn\x2d0x1090fadcc5ce.service):
 Failed to process device, ignoring: File name too long
Apr  8 14:44:09 IC

[Bug 1874270] Re: NVMe/FC connections fail to reestablish after controller is reset

2021-04-08 Thread Jennifer Duong
Dan, I've attached /var/log/syslog and journalctl logs of a recreate
after installing nvme-cli_1.9-1ubuntu0.1+bug1874270v20210408b1_amd64 and
rebooting the host. It looks like connect-all didn't recognize the
"--matching" flag.

Apr  8 11:48:45 ICTM1608S01H1 root: JD: Resetting controller B
Apr  8 11:49:39 ICTM1608S01H1 kernel: [  545.652088] lpfc :af:00.1: 
5:(0):6172 NVME rescanned DID x3d3800 port_state x2
Apr  8 11:49:39 ICTM1608S01H1 kernel: [  545.652166] nvme nvme2: NVME-FC{2}: 
controller connectivity lost. Awaiting Reconnect
Apr  8 11:49:39 ICTM1608S01H1 kernel: [  545.652203] lpfc :18:00.1: 
1:(0):6172 NVME rescanned DID x3d3800 port_state x2
Apr  8 11:49:39 ICTM1608S01H1 kernel: [  545.652276] nvme nvme6: NVME-FC{6}: 
controller connectivity lost. Awaiting Reconnect
Apr  8 11:49:39 ICTM1608S01H1 kernel: [  545.673853] nvme nvme2: NVME-FC{2}: io 
failed due to lldd error 6
Apr  8 11:49:39 ICTM1608S01H1 systemd[1]: Started NVMf auto-connect scan upon 
nvme discovery controller Events.
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]: connect-all: unrecognized option 
'--matching'
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]: Discover NVMeoF subsystems and 
connect to them  [  --transport=, -t  ]--- transport type
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --traddr=, -a  ] 
  --- transport address
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --trsvcid=, -s  ]
  --- transport service id (e.g. IP
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  port)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --host-traddr=, -w  
]  --- host traddr (e.g. FC WWN's)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --hostnqn=, -q  ]
  --- user-defined hostnqn (if default
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  not used)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --hostid=, -I  ] 
  --- user-defined hostid (if default
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  not used)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --raw=, -r  ]
  --- raw output file
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --device=, -d  ] 
  --- use existing discovery controller
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  device
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --keep-alive-tmo=, -k 
 ] --- keep alive timeout period in
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  seconds
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --reconnect-delay=, -c 
 ] --- reconnect timeout period in
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  seconds
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --ctrl-loss-tmo=, -l 
 ] --- controller loss timeout period in
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  seconds
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --hdr_digest, -g ]   
  --- enable transport protocol header
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  digest (TCP transport)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --data_digest, -G ]  
  --- enable transport protocol data
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  digest (TCP transport)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --nr-io-queues=, -i  
] --- number of io queues to use
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  (default is core count)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --nr-write-queues=, -W 
 ] --- number of write queues to use
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  (default 0)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --nr-poll-queues=, -P 
 ] --- number of poll queues to use
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  (default 0)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --queue-size=, -Q  ] 
  --- number of io queue elements to
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   
  use (default 128)
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --persistent, -p ]   
  --- persistent discovery connection
Apr  8 11:49:39 ICTM1608S01H1 nvme[7329]:   [  --quiet, -Q ]
  --- suppress already connected errors

** Attachment added: "nvme-cli-1.9-1ubuntu0.1+bug1874270v20210408b1-logs.zip"
   
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1874270/+attachment/5485673/+files/nvme-cli-1.9-1ubuntu0.1+bug1874270v20210408b1-logs.zip


[Bug 1874270] Re: NVMe/FC connections fail to reestablish after controller is reset

2021-04-07 Thread Jennifer Duong
Dan, where do I change the kernel rport timeout? And how can I go about
changing the timeout on a server with Emulex cards installed versus
Qlogic?
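
For reference, the remote-port timeout is exposed per rport by the FC
transport layer, so the same sysfs attribute applies whether the HBA is
driven by lpfc (Emulex) or qla2xxx (QLogic). A sketch only, with an
illustrative rport name and value:

# show the current timeout (in seconds) for one remote port
cat /sys/class/fc_remote_ports/rport-10:0-9/dev_loss_tmo

# raise it, e.g. to 120 seconds, for that rport
echo 120 > /sys/class/fc_remote_ports/rport-10:0-9/dev_loss_tmo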


[Bug 1874270] Re: NVMe/FC connections fail to reestablish after controller is reset

2021-04-07 Thread Jennifer Duong
At the time a storage controller is failed, /var/log/syslog and
journalctl look identical:

Apr  7 11:45:28 ICTM1608S01H1 kernel: [586649.657080] lpfc :af:00.1: 
5:(0):6172 NVME rescanned DID x3d0a00 port_state x2
Apr  7 11:45:28 ICTM1608S01H1 kernel: [586649.657268] lpfc :18:00.1: 
1:(0):6172 NVME rescanned DID x3d0a00 port_state x2
Apr  7 11:45:28 ICTM1608S01H1 kernel: [586649.658064] nvme nvme5: NVME-FC{4}: 
controller connectivity lost. Awaiting Reconnect
Apr  7 11:45:28 ICTM1608S01H1 kernel: [586649.659036] nvme nvme1: NVME-FC{0}: 
controller connectivity lost. Awaiting Reconnect
Apr  7 11:45:28 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 
'systemctl --no-block start 
nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x202200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x2090fadcc5ce:pn-0x1090fadcc5ce.service'
 failed with exit code 1.
Apr  7 11:45:28 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 
'systemctl --no-block start 
nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x202200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x20109b8f2b8e:pn-0x10109b8f2b8e.service'
 failed with exit code 1.
Apr  7 11:45:28 ICTM1608S01H1 kernel: [586649.680671] nvme nvme5: NVME-FC{4}: 
io failed due to lldd error 6
Apr  7 11:45:28 ICTM1608S01H1 kernel: [586649.703918] nvme nvme1: NVME-FC{0}: 
io failed due to lldd error 6
Apr  7 11:45:29 ICTM1608S01H1 kernel: [586650.469693] lpfc :af:00.0: 
4:(0):6172 NVME rescanned DID x011400 port_state x2
Apr  7 11:45:29 ICTM1608S01H1 kernel: [586650.469715] lpfc :18:00.0: 
0:(0):6172 NVME rescanned DID x011400 port_state x2
Apr  7 11:45:29 ICTM1608S01H1 kernel: [586650.470629] nvme nvme4: NVME-FC{1}: 
controller connectivity lost. Awaiting Reconnect
Apr  7 11:45:29 ICTM1608S01H1 kernel: [586650.471611] nvme nvme8: NVME-FC{5}: 
controller connectivity lost. Awaiting Reconnect
Apr  7 11:45:29 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 
'systemctl --no-block start 
nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x201200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x2090fadcc5cd:pn-0x1090fadcc5cd.service'
 failed with exit code 1.
Apr  7 11:45:29 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 
'systemctl --no-block start 
nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x201200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x20109b8f2b8d:pn-0x10109b8f2b8d.service'
 failed with exit code 1.
Apr  7 11:45:29 ICTM1608S01H1 kernel: [586650.493222] nvme nvme4: NVME-FC{1}: 
io failed due to lldd error 6
Apr  7 11:45:29 ICTM1608S01H1 kernel: [586650.516848] nvme nvme8: NVME-FC{5}: 
io failed due to lldd error 6
Apr  7 11:45:59 ICTM1608S01H1 kernel: [586680.663369]  rport-10:0-9: blocked FC 
remote port time out: removing rport
Apr  7 11:45:59 ICTM1608S01H1 kernel: [586680.663373]  rport-16:0-9: blocked FC 
remote port time out: removing rport
Apr  7 11:45:59 ICTM1608S01H1 kernel: [586680.663377]  rport-15:0-9: blocked FC 
remote port time out: removing rport
Apr  7 11:45:59 ICTM1608S01H1 kernel: [586680.663383]  rport-12:0-9: blocked FC 
remote port time out: removing rport
Apr  7 11:46:28 ICTM1608S01H1 kernel: [586709.847350] nvme nvme5: NVME-FC{4}: 
dev_loss_tmo (60) expired while waiting for remoteport connectivity.
Apr  7 11:46:28 ICTM1608S01H1 kernel: [586709.847363] nvme nvme5: Removing 
ctrl: NQN "nqn.1992-08.com.netapp:5700.600a098000d8580e5c0136a2"
Apr  7 11:46:28 ICTM1608S01H1 kernel: [586709.847385] nvme nvme1: NVME-FC{0}: 
dev_loss_tmo (60) expired while waiting for remoteport connectivity.
Apr  7 11:46:28 ICTM1608S01H1 kernel: [586709.847395] nvme nvme1: Removing 
ctrl: NQN "nqn.1992-08.com.netapp:5700.600a098000d8580e5c0136a2"
Apr  7 11:46:29 ICTM1608S01H1 kernel: [586710.615343] nvme nvme4: NVME-FC{1}: 
dev_loss_tmo (60) expired while waiting for remoteport connectivity.
Apr  7 11:46:29 ICTM1608S01H1 kernel: [586710.615357] nvme nvme4: Removing 
ctrl: NQN "nqn.1992-08.com.netapp:5700.600a098000d8580e5c0136a2"
Apr  7 11:46:29 ICTM1608S01H1 kernel: [586710.615375] nvme nvme8: NVME-FC{5}: 
dev_loss_tmo (60) expired while waiting for remoteport connectivity.
Apr  7 11:46:29 ICTM1608S01H1 kernel: [586710.615389] nvme nvme8: Removing 
ctrl: NQN "nqn.1992-08.com.netapp:5700.600a098000d8580e5c0136a2"
Apr  7 11:47:07 ICTM1608S01H1 systemd-udevd[2896874]: fc_udev_device: Process 
'systemctl --no-block start 
nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x201200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x2090fadcc5cd:pn-0x1090fadcc5cd.service'
 failed with exit code 1.
Apr  7 11:47:07 ICTM1608S01H1 systemd-udevd[2896874]: fc_udev_device: Process 
'systemctl --no-block start 
nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x201200a098d8580e\t--trs

[Bug 1874270] Re: NVMe/FC connections fail to reestablish after controller is reset

2021-04-05 Thread Jennifer Duong
Dan, where can I find the location of these logs?


[Bug 1920991] Re: Ubuntu 20.04 - NVMe/IB I/O error detected while manually resetting controller

2021-03-30 Thread Jennifer Duong
All four Ubuntu 20.04 hosts detect an I/O error almost immediately after
my E-Series storage controller is reset. The reset was triggered via
SMcli, but I imagine the same behavior can be reproduced by resetting
the controller from SANtricity Storage Manager. Smash is the tool we use
to drive read and write I/O.


[Bug 1873952] Re: Call trace during manual controller reset on NVMe/RoCE array

2021-03-29 Thread Jennifer Duong
This call trace is also seen while manually resetting an NVIDIA Mellanox
InfiniBand switch that is connected to an NVMe/IB EF600 storage array.
The server has an MCX354A-FCBT installed, running FW 2.42.5000. The
system is connected to a QM8700 and an SB7800; both switches are running
MLNX-OS 3.9.2110. The message logs are attached.

** Attachment added: "ICTM1605S01H4-switch-port-fail"
   
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1873952/+attachment/5482212/+files/ICTM1605S01H4-switch-port-fail

** Summary changed:

- Call trace during manual controller reset on NVMe/RoCE array
+ Call trace during manual controller reset on NVMe/RoCE array and switch reset 
on NVMe/IB array


[Bug 1920991] Re: Ubuntu 20.04 - NVMe/IB I/O error detected while manually resetting controller

2021-03-23 Thread Jennifer Duong
Host I/O logs

** Attachment added: "ICTM1605S01-smash.zip"
   
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1920991/+attachment/5480292/+files/ICTM1605S01-smash.zip


[Bug 1920991] [NEW] Ubuntu 20.04 - NVMe/IB I/O error detected while manually resetting controller

2021-03-23 Thread Jennifer Duong
Public bug reported:

On all four of my Ubuntu 20.04 hosts, an I/O error is detected almost
immediately after my E-Series storage controller is reset. I am
currently running Ubuntu 20.04 with kernel-5.4.0-67-generic,
rdma-core-28.0-1ubuntu1, nvme-cli-1.9-1ubuntu0.1, and native NVMe
multipathing enabled. These messages appear to coincide with when my
test fails:

Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.616408] blk_update_request: I/O 
error, dev nvme0c0n12, sector 289440 op 0x1:(WRITE) flags 0x400c800 phys_seg 6 
prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.616433] blk_update_request: I/O 
error, dev nvme0c0n12, sector 291488 op 0x1:(WRITE) flags 0x4008800 phys_seg 
134 prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.617137] blk_update_request: I/O 
error, dev nvme0c0n12, sector 295048 op 0x1:(WRITE) flags 0x4008800 phys_seg 87 
prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.617184] blk_update_request: I/O 
error, dev nvme0c0n12, sector 293000 op 0x1:(WRITE) flags 0x400c800 phys_seg 
180 prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.617624] blk_update_request: I/O 
error, dev nvme0c0n12, sector 298608 op 0x1:(WRITE) flags 0x4008800 phys_seg 47 
prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.617678] blk_update_request: I/O 
error, dev nvme0c0n12, sector 296560 op 0x1:(WRITE) flags 0x400c800 phys_seg 62 
prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.618070] blk_update_request: I/O 
error, dev nvme0c0n12, sector 302160 op 0x1:(WRITE) flags 0x4008800 phys_seg 24 
prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.618084] blk_update_request: I/O 
error, dev nvme0c0n12, sector 300112 op 0x1:(WRITE) flags 0x400c800 phys_seg 47 
prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.618497] blk_update_request: I/O 
error, dev nvme0c0n12, sector 305712 op 0x1:(WRITE) flags 0x4008800 phys_seg 25 
prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.618521] blk_update_request: I/O 
error, dev nvme0c0n12, sector 303664 op 0x1:(WRITE) flags 0x400c800 phys_seg 63 
prio class 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.640763] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.641099] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.641305] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.641317] 
ldm_validate_partition_table(): Disk read failed.
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.641551] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.641751] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.641955] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.642160] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.642172] Dev nvme0n12: unable to 
read RDB block 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.642394] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.642600] Buffer I/O error on dev 
nvme0n12, logical block 3, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.642802] Buffer I/O error on dev 
nvme0n12, logical block 0, async page read
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.643015] nvme0n12: unable to read 
partition table
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.653495] 
ldm_validate_partition_table(): Disk read failed.
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.654188] Dev nvme0n20: unable to 
read RDB block 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.654850] nvme0n20: unable to read 
partition table
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.665151] 
ldm_validate_partition_table(): Disk read failed.
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.665673] Dev nvme0n126: unable to 
read RDB block 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.666194] nvme0n126: unable to read 
partition table
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.685662] 
ldm_validate_partition_table(): Disk read failed.
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.686504] Dev nvme0n124: unable to 
read RDB block 0
Mar 23 12:23:58 ICTM1605S01H1 kernel: [ 1232.687187] nvme0n124: unable to read 
partition table

** Affects: nvme-cli (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: "apport.nvme-cli.0c_6er3t.apport"
   
https://bugs.launchpad.net/bugs/1920991/+attachment/5480283/+files/apport.nvme-cli.0c_6er3t.apport


[Bug 1920991] Re: Ubuntu 20.04 - NVMe/IB I/O error detected while manually resetting controller

2021-03-23 Thread Jennifer Duong
Host message logs

** Attachment added: "ICTM1605S01-messages.zip"
   
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1920991/+attachment/5480289/+files/ICTM1605S01-messages.zip


[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-06-09 Thread Jennifer Duong
I am attempting to install Ubuntu 20.04 LTS to my SANboot LUN before
upgrading to the kernel provided, but it appears that the installation
fails at the "updating initramfs configuration" step.


[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-06-09 Thread Jennifer Duong
** Attachment added: "Ubuntu-20-04-LTS-SANboot-Fail.zip"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1867686/+attachment/5382025/+files/Ubuntu-20-04-LTS-SANboot-Fail.zip


[Bug 1874270] Re: NVMe/FC connections fail to reestablish after controller is reset

2020-05-22 Thread Jennifer Duong
I am still seeing this with Ubuntu 20.04 LTS


[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-05-22 Thread Jennifer Duong
I don't believe so, as I am able to discover SCSI devices on my Emulex
fabric-attached, Emulex direct-connect, and QLogic fabric-attached hosts
before starting this test.


[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-05-22 Thread Jennifer Duong
I believe it's a bug in the inbox QLogic driver.


[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-05-22 Thread Jennifer Duong
I am still seeing this with Ubuntu 20.04 LTS


[Bug 1874336] Re: NVMe/RoCE I/O QID timeout during change volume ownership

2020-05-21 Thread Jennifer Duong
I am still seeing this with Ubuntu 20.04 LTS


[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-05-20 Thread Jennifer Duong
I am still seeing this with Ubuntu 20.04 LTS


[Bug 1873952] Re: Call trace during manual controller reset on NVMe/RoCE array

2020-05-20 Thread Jennifer Duong
I am still seeing this with Ubuntu 20.04 LTS


[Bug 1867366] Re: hostnqn fails to automatically generate after installing nvme-cli

2020-05-20 Thread Jennifer Duong
I am still seeing this with Ubuntu 20.04 LTS


[Bug 1874336] Re: NVMe/RoCE I/O QID timeout during change volume ownership

2020-04-24 Thread Jennifer Duong
As a note, I am running without DA (Data Assurance).


[Bug 1874336] Re: NVMe/RoCE I/O QID timeout during change volume ownership

2020-04-22 Thread Jennifer Duong
Message logs attached.

** Attachment added: "ICTM1611S01-messages-4-20-20.zip"
   
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1874336/+attachment/5358157/+files/ICTM1611S01-messages-4-20-20.zip


[Bug 1874336] [NEW] NVMe/RoCE I/O QID timeout during change volume ownership

2020-04-22 Thread Jennifer Duong
Public bug reported:

On my Ubuntu 20.04 NVMe/RoCE configuration (kernel-5.4.0-24-generic,
nvme-cli 1.9-1), I am seeing I/O QID timeouts when changing the
ownership of the volumes on my E-Series array. From my understanding,
this should not be occurring: the array stays optimal throughout, and
all of my NVMe/RoCE ports are up and optimal.

Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553911] nvme nvme1: I/O 708 QID 1 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553914] nvme nvme1: I/O 29 QID 3 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553930] nvme nvme1: I/O 154 QID 4 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553931] nvme nvme1: I/O 695 QID 3 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553933] nvme nvme1: I/O 709 QID 1 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553935] nvme nvme1: I/O 155 QID 4 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553937] nvme nvme1: I/O 696 QID 3 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553938] nvme nvme1: I/O 710 QID 1 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553940] nvme nvme1: I/O 571 QID 4 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553942] nvme nvme1: I/O 697 QID 3 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553943] nvme nvme1: I/O 30 QID 3 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553946] nvme nvme1: I/O 156 QID 4 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.553952] nvme nvme1: I/O 23 QID 3 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.557842] nvme nvme1: I/O 965 QID 2 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.557845] nvme nvme1: I/O 966 QID 2 
timeout
Apr 20 16:36:14 ICTM1611S01H4 kernel: [ 9819.557847] nvme nvme1: I/O 967 QID 2 
timeout
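
As a point of reference while this is investigated, the timeouts the
kernel applies to these commands can be read back from sysfs. A sketch
with illustrative device names, not a description of the failing
configuration itself:

# default I/O timeout (in seconds) used by the nvme core driver
cat /sys/module/nvme_core/parameters/io_timeout

# value in effect (in milliseconds) for the block queue of the device
# logging the QID timeouts above
cat /sys/block/nvme1n1/queue/io_timeout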

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: nvme-cli 1.9-1
ProcVersionSignature: Ubuntu 5.4.0-24.28-generic 5.4.30
Uname: Linux 5.4.0-24-generic x86_64
ApportVersion: 2.20.11-0ubuntu27
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Apr 22 16:39:45 2020
InstallationDate: Installed on 2020-04-14 (8 days ago)
InstallationMedia: Ubuntu-Server 20.04 LTS "Focal Fossa" - Alpha amd64 
(20200124)
ProcEnviron:
 TERM=xterm
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: nvme-cli
UpgradeStatus: No upgrade log present (probably fresh install)
modified.conffile..etc.nvme.hostnqn: ictm1611s01h1-hostnqn
mtime.conffile..etc.nvme.hostnqn: 2020-04-14T16:03:41.867650

** Affects: nvme-cli (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal


[Bug 1874336] Re: NVMe/RoCE I/O QID timeout during change volume ownership

2020-04-22 Thread Jennifer Duong
I'm running with the following cards:

MCX516A-GCAT FW 16.26.1040
MCX516A-CCAT FW 16.26.1040
QL45212H FW 8.37.7.0
MCX416A-CCAT FW 12.27.1016
MCX4121A-ACAT FW 14.27.1016


[Bug 1874270] [NEW] NVMe/FC connections fail to reestablish after controller is reset

2020-04-22 Thread Jennifer Duong
Public bug reported:

My FC host can't seem to reestablish NVMe/FC connections after resetting
one of my E-Series controllers. This is with Ubuntu 20.04,
kernel-5.4.0-25-generic, and nvme-cli 1.9-1. I'm seeing this on my
fabric-attached and direct-connect systems. These are the HBAs I'm
running with:

Emulex LPe16002B-M6 FV12.4.243.11 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe16002B-M6 FV12.4.243.11 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe32002-M2 FV12.4.243.17 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe32002-M2 FV12.4.243.17 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe35002-M2 FV12.4.243.23 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe35002-M2 FV12.4.243.23 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux

QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: nvme-cli 1.9-1
ProcVersionSignature: Ubuntu 5.4.0-25.29-generic 5.4.30
Uname: Linux 5.4.0-25-generic x86_64
ApportVersion: 2.20.11-0ubuntu27
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Apr 22 09:26:00 2020
InstallationDate: Installed on 2020-04-13 (8 days ago)
InstallationMedia: Ubuntu-Server 20.04 LTS "Focal Fossa" - Alpha amd64 
(20200124)
ProcEnviron:
 TERM=xterm
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: nvme-cli
UpgradeStatus: No upgrade log present (probably fresh install)
modified.conffile..etc.nvme.hostnqn: ictm1610s01h1-hostnqn
mtime.conffile..etc.nvme.hostnqn: 2020-04-14T16:02:14.512816

** Affects: nvme-cli (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal


[Bug 1874270] Re: NVMe/FC connections fail to reestablish after controller is reset

2020-04-22 Thread Jennifer Duong
Also, it does not look like Broadcom's website has an autoconnect script
that supports Ubuntu.


[Bug 1874270] Re: NVMe/FC connections fail to reestablish after controller is reset

2020-04-22 Thread Jennifer Duong
I've attached logs.

** Attachment added: "ICTM1610S01-messages-4-21-20.zip"
   
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1874270/+attachment/5357988/+files/ICTM1610S01-messages-4-21-20.zip


[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-04-22 Thread Jennifer Duong
Any update on this?


[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-04-22 Thread Jennifer Duong
Any update on this?


[Bug 1867366] Re: hostnqn fails to automatically generate after installing nvme-cli

2020-04-22 Thread Jennifer Duong
Any update on this?


[Bug 1873952] [NEW] Call trace during manual controller reset on NVMe/RoCE array

2020-04-20 Thread Jennifer Duong
Public bug reported:

After manually resetting one of my E-Series NVMe/RoCE controllers, I hit
the following call trace:

Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958231] workqueue: WQ_MEM_RECLAIM 
nvme-wq:nvme_rdma_reconnect_ctrl_work [nvme_rdma] is flushing !WQ_MEM_RECLAIM 
ib_addr:process_one_req [ib_core]
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958244] WARNING: CPU: 11 PID: 6260 
at kernel/workqueue.c:2610 check_flush_dependency+0x11c/0x140
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958245] Modules linked in: xfs 
nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace fscache rpcrdma 
rdma_ucm ib_iser ib_umad libiscsi ib_ipoib scsi_transport_iscsi intel_rapl_msr 
intel_rapl_common sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp 
ipmi_ssif kvm_intel kvm intel_cstate intel_rapl_perf joydev input_leds dcdbas 
mei_me mei ipmi_si ipmi_devintf ipmi_msghandler mac_hid acpi_power_meter 
sch_fq_codel nvme_rdma rdma_cm iw_cm ib_cm nvme_fabrics nvme_core sunrpc 
ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov 
async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 
multipath linear mlx5_ib ib_uverbs uas usb_storage ib_core hid_generic usbhid 
hid mgag200 crct10dif_pclmul drm_vram_helper crc32_pclmul i2c_algo_bit ttm 
ghash_clmulni_intel drm_kms_helper ixgbe aesni_intel syscopyarea sysfillrect 
mxm_wmi xfrm_algo sysimgblt crypto_simd mlx5_core fb_sys_fops dca cryptd drm 
glue_helper mdio pci_hyperv_intf ahci lpc_ich tg3
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958305]  tls libahci mlxfw wmi 
scsi_dh_emc scsi_dh_rdac scsi_dh_alua dm_multipath
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958315] CPU: 11 PID: 6260 Comm: 
kworker/u34:3 Not tainted 5.4.0-24-generic #28-Ubuntu
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958316] Hardware name: Dell Inc. 
PowerEdge R730/072T6D, BIOS 2.8.0 005/17/2018
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958321] Workqueue: nvme-wq 
nvme_rdma_reconnect_ctrl_work [nvme_rdma]
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958326] RIP: 
0010:check_flush_dependency+0x11c/0x140
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958329] Code: 8d 8b b0 00 00 00 48 
8b 50 18 4d 89 e0 48 8d b1 b0 00 00 00 48 c7 c7 40 f8 75 9d 4c 89 c9 c6 05 f1 
d9 74 01 01 e8 1f 14 fe ff <0f> 0b e9 07 ff ff ff 80 3d df d9 74 01 00 75 92 e9 
3c ff ff ff 66
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958331] RSP: 0018:b34bc4e87bf0 
EFLAGS: 00010086
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958333] RAX:  RBX: 
946423812400 RCX: 
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958334] RDX: 0089 RSI: 
9df926a9 RDI: 0046
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958336] RBP: b34bc4e87c10 R08: 
9df92620 R09: 0089
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958337] R10: 9df92a00 R11: 
9df9268f R12: c09be560
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958338] R13: 9468238b2f00 R14: 
0001 R15: 94682dbbb700
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958341] FS:  
() GS:94682fd4() knlGS:
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958342] CS:  0010 DS:  ES: 
 CR0: 80050033
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958344] CR2: 7ff61cbf4ff8 CR3: 
00040a40a001 CR4: 003606e0
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958345] DR0:  DR1: 
 DR2: 
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958347] DR3:  DR6: 
fffe0ff0 DR7: 0400
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958348] Call Trace:
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958355]  __flush_work+0x97/0x1d0
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958360]  
__cancel_work_timer+0x10e/0x190
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958368]  ? 
dev_printk_emit+0x4e/0x65
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958371]  
cancel_delayed_work_sync+0x13/0x20
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958387]  
rdma_addr_cancel+0x8a/0xb0 [ib_core]
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958393]  
cma_cancel_operation+0x72/0x1e0 [rdma_cm]
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958398]  
rdma_destroy_id+0x56/0x2f0 [rdma_cm]
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958402]  
nvme_rdma_alloc_queue.cold+0x28/0x5b [nvme_rdma]
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958406]  
nvme_rdma_setup_ctrl+0x37/0x720 [nvme_rdma]
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958412]  ? 
try_to_wake_up+0x224/0x6a0
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958416]  
nvme_rdma_reconnect_ctrl_work+0x27/0x40 [nvme_rdma]
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958419]  
process_one_work+0x1eb/0x3b0
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958422]  worker_thread+0x4d/0x400
Apr 20 14:08:24 ICTM1611S01H4 kernel: [  949.958427]  kthread+0x104/0x140
Apr 20 14:08:24 ICTM1

[Bug 1873952] Re: Call trace during manual controller reset

2020-04-20 Thread Jennifer Duong
** Attachment added: "ICTM1611S01H4-syslog-4-20-20"
   
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1873952/+attachment/5357387/+files/ICTM1611S01H4-syslog-4-20-20

** Summary changed:

- Call trace during manual controller reset
+ Call trace during manual controller reset on NVMe/RoCE array


[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-04-08 Thread Jennifer Duong
Has anyone had a chance to look into this?


[Bug 1867366] Re: hostnqn fails to automatically generate after installing nvme-cli

2020-04-08 Thread Jennifer Duong
Has anyone had a chance to look at this?


[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-04-08 Thread Jennifer Duong
Any update on this?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-03-24 Thread Jennifer Duong
Has anyone had a chance to look into this?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-03-24 Thread Jennifer Duong
Has anyone had a chance to look into this?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1867686

Title:
  FC hosts become unresponsive during array upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1867686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1867366] Re: hostnqn fails to automatically generate after installing nvme-cli

2020-03-24 Thread Jennifer Duong
Has anyone had a chance to look at this?

** Changed in: nvme-cli (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1867366

Title:
  hostnqn fails to automatically generate after installing nvme-cli

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1867366/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-03-17 Thread Jennifer Duong
** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1867686

Title:
  FC hosts become unresponsive during array upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1867686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-03-17 Thread Jennifer Duong
** Package changed: ubuntu => linux (Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1867686

Title:
  FC hosts become unresponsive during array upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1867686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-03-16 Thread Jennifer Duong
What should I try next?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-03-16 Thread Jennifer Duong
** Attachment added: "apport.linux-image-generic.af19wif1.apport"
   
https://bugs.launchpad.net/ubuntu/+bug/1867686/+attachment/5337678/+files/apport.linux-image-generic.af19wif1.apport

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1867686

Title:
  FC hosts become unresponsive during array upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1867686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1867686] [NEW] FC hosts become unresponsive during array upgrade

2020-03-16 Thread Jennifer Duong
Public bug reported:

While running automated array upgrades, my FC hosts become unresponsive
and/or stop logging. Sometimes the host stops logging to
/var/log/syslog, but I can still SSH into it. Sometimes I try to SSH to
the host in the middle of the test and it prompts me for a username,
which I enter, but then it hangs indefinitely and I have to power cycle
the host. Other times the host becomes completely unresponsive, I can't
SSH in at all, and I have to power cycle it to regain access. I thought
the host might be crashing, but no files are being generated in
/var/crash. I also thought my hosts might be going to sleep or
hibernating, but I ran "sudo systemctl mask sleep.target suspend.target
hibernate.target hybrid-sleep.target" and am not seeing any
improvement.
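
A few ways to capture more state when a host stops responding (a
sketch; it assumes the box is reachable long enough to set this up and
that the standard Ubuntu kdump tooling is acceptable):

sudo mkdir -p /var/log/journal && sudo systemctl restart systemd-journald   # make journald logs persistent
sudo apt install linux-crashdump          # kdump, so a panic leaves a dump under /var/crash
kdump-config show                         # confirm the crash kernel is loaded
echo kernel.sysrq=1 | sudo tee /etc/sysctl.d/99-sysrq.conf && sudo sysctl --system   # allow sysrq backtraces from the console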

** Affects: ubuntu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1867686

Title:
  FC hosts become unresponsive during array upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1867686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1867686] Re: FC hosts become unresponsive during array upgrade

2020-03-16 Thread Jennifer Duong
** Attachment added: "ICTM1610S01-syslog-3-13-20"
   
https://bugs.launchpad.net/ubuntu/+bug/1867686/+attachment/5337679/+files/ICTM1610S01-syslog-3-13-20.zip

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1867686

Title:
  FC hosts become unresponsive during array upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1867686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1867366] [NEW] hostnqn fails to automatically generate after installing nvme-cli

2020-03-13 Thread Jennifer Duong
Public bug reported:

hostnqn fails to automatically generate after installing nvme-cli
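
As a possible workaround (a minimal sketch; it assumes nvme-cli is
installed and you have root), the host NQN can be generated by hand
with the gen-hostnqn subcommand:

sudo nvme gen-hostnqn | sudo tee /etc/nvme/hostnqn   # create the missing hostnqn
uuidgen | sudo tee /etc/nvme/hostid                  # optional matching host ID, if uuidgen is available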

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: nvme-cli 1.9-1
ProcVersionSignature: Ubuntu 5.4.0-9.12-generic 5.4.3
Uname: Linux 5.4.0-9-generic x86_64
ApportVersion: 2.20.11-0ubuntu18
Architecture: amd64
Date: Fri Mar  6 14:09:20 2020
Dependencies:
 gcc-9-base 9.2.1-21ubuntu1
 libc6 2.30-0ubuntu3
 libgcc1 1:9.2.1-21ubuntu1
 libidn2-0 2.2.0-2
 libunistring2 0.9.10-2
InstallationDate: Installed on 2020-03-05 (0 days ago)
InstallationMedia: Ubuntu-Server 20.04 LTS "Focal Fossa" - Alpha amd64 
(20200124)
ProcEnviron:
 TERM=xterm
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: nvme-cli
UpgradeStatus: No upgrade log present (probably fresh install)
modified.conffile..etc.nvme.hostnqn: [modified]
mtime.conffile..etc.nvme.hostnqn: 2020-03-06T11:27:08.674276

** Affects: nvme-cli (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1867366

Title:
  hostnqn fails to automatically generate after installing nvme-cli

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/1867366/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-03-10 Thread Jennifer Duong
Any update on this?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-02-28 Thread Jennifer Duong
Paolo, I've upgraded to kernel-5.4.0-16-generic and am still
encountering this issue. Any suggestions on what to try next? Are there
any additional logs that you'd like for me to grab?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-02-21 Thread Jennifer Duong
Any suggestions on what to try next?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-02-12 Thread Jennifer Duong
Paolo, I updated to kernel-5.4.0-14-generic and am still encountering
this issue. Are there any additional logs you need from me?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-02-06 Thread Jennifer Duong
Paolo, what should my next steps be?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860587] Re: multipath -ll doesn't discover down all paths on Emulex hosts

2020-02-03 Thread Jennifer Duong
Mauricio, this appears to be resolved after upgrading to the latest
packages in -proposed. Will these changes be included in whichever ISO
GAs? (A sketch of the -proposed verification steps follows the output
below.)

root@ICTM1610S01H1:~# multipath -ll
3600a098000a0a2bc75685ddf9fc5 dm-8 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:1:141 sdap  66:144  active ready running
| |- 12:0:0:141 sdbs  68:96   active ready running
| |- 13:0:1:141 sdfb  129:208 active ready running
| |- 14:0:0:141 sdge  131:160 active ready running
| |- 15:0:1:141 sdkn  66:432  active ready running
| `- 16:0:0:141 sdjj  8:464   active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:141 sdm   8:192   active ready running
  |- 12:0:1:141 sdcv  70:48   active ready running
  |- 13:0:0:141 sddy  128:0   active ready running
  |- 14:0:1:141 sdhh  133:112 active ready running
  |- 15:0:0:141 sdin  135:112 active ready running
  `- 16:0:1:141 sdlh  67:496  active ready running
3600a098000a0a2bc756f5ddf9ffb dm-21 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:1:151 sdaz  67:48   active ready running
| |- 12:0:0:151 sdcc  69:0    active ready running
| |- 13:0:1:151 sdfl  130:112 active ready running
| |- 14:0:0:151 sdgo  132:64  active ready running
| |- 15:0:1:151 sdma  69:288  active ready running
| `- 16:0:0:151 sdjx  65:432  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:151 sdw   65:96   active ready running
  |- 12:0:1:151 sddf  70:208  active ready running
  |- 13:0:0:151 sdei  128:160 active ready running
  |- 14:0:1:151 sdhr  134:16  active ready running
  |- 15:0:0:151 sdjd  8:368   active ready running
  `- 16:0:1:151 sdls  68:416  active ready running
3600a098000a0a2bc756a5ddf9fd0 dm-11 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:1:143 sdar  66:176  active ready running
| |- 12:0:0:143 sdbu  68:128  active ready running
| |- 13:0:1:143 sdfd  129:240 active ready running
| |- 14:0:0:143 sdgg  131:192 active ready running
| |- 15:0:1:143 sdkq  66:480  active ready running
| `- 16:0:0:143 sdjn  65:272  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:143 sdo   8:224   active ready running
  |- 12:0:1:143 sdcx  70:80   active ready running
  |- 13:0:0:143 sdea  128:32  active ready running
  |- 14:0:1:143 sdhj  133:144 active ready running
  |- 15:0:0:143 sdir  135:176 active ready running
  `- 16:0:1:143 sdlj  68:272  active ready running
3600a098000a0a28a9d785ddf9f0f dm-12 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:152 sdx   65:112  active ready running
| |- 12:0:1:152 sddg  70:224  active ready running
| |- 13:0:0:152 sdej  128:176 active ready running
| |- 14:0:1:152 sdhs  134:32  active ready running
| |- 15:0:0:152 sdjg  8:416   active ready running
| `- 16:0:1:152 sdlu  68:448  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:1:152 sdba  67:64   active ready running
  |- 12:0:0:152 sdcd  69:16   active ready running
  |- 13:0:1:152 sdfm  130:128 active ready running
  |- 14:0:0:152 sdgp  132:80  active ready running
  |- 15:0:1:152 sdmb  69:304  active ready running
  `- 16:0:0:152 sdka  65:480  active ready running
3600a098000a0a28a9d675ddf9eac dm-1 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:134 sdf   8:80    active ready running
| |- 12:0:1:134 sdco  69:192  active ready running
| |- 13:0:0:134 sddr  71:144  active ready running
| |- 14:0:1:134 sdha  133:0   active ready running
| |- 15:0:0:134 sdic  134:192 active ready running
| `- 16:0:1:134 sdla  67:384  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:1:134 sdai  66:32   active ready running
  |- 12:0:0:134 sdbl  67:240  active ready running
  |- 13:0:1:134 sdeu  129:96  active ready running
  |- 14:0:0:134 sdfx  131:48  active ready running
  |- 15:0:1:134 sdke  66:288  active ready running
  `- 16:0:0:134 sdis  135:192 active ready running
3600a098000a0a28a9d655ddf9ea1 dm-0 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:132 sdd   8:48    active ready running
| |- 12:0:1:132 sdcm  69:160  active ready r
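
For anyone repeating the verification, one way to pull just the fixed
package from -proposed and confirm the version is roughly the
following (a sketch; adjust the mirror and suite names to your
environment):

echo 'deb http://archive.ubuntu.com/ubuntu focal-proposed main universe' | sudo tee /etc/apt/sources.list.d/focal-proposed.list
sudo apt update
sudo apt install -t focal-proposed multipath-tools   # pull only this package from -proposed
apt-cache policy multipath-tools                     # confirm the -proposed version is installed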

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-02-03 Thread Jennifer Duong
Paolo, it doesn't look like the newest Focal kernel resolves this issue.
I've attached the output from a working kernel.

root@ICTM1610S01H2:~# cat /sys/class/fc_host/host*/symbolic_name
QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k
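
One thing that may be worth ruling out on the non-working kernel is
whether FC-NVMe support is actually enabled in the qla2xxx driver. A
sketch, assuming the ql2xnvmeenable module parameter exists in this
driver build:

cat /sys/module/qla2xxx/parameters/ql2xnvmeenable    # 1 = FC-NVMe enabled
echo 'options qla2xxx ql2xnvmeenable=1' | sudo tee /etc/modprobe.d/qla2xxx-nvme.conf
sudo update-initramfs -u && sudo reboot              # qla2xxx likely loads from the initramfs on SANboot hosts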

** Attachment added: "documents_20200203.zip"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+attachment/5325050/+files/documents_20200203.zip

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860587] Re: multipath -ll doesn't discover down all paths on Emulex hosts

2020-02-03 Thread Jennifer Duong
** Attachment added: 
"sosreport-ICTM1610S01H1-lp1860587-2020-02-03-ntoqvna.tar.xz"
   
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+attachment/5325009/+files/sosreport-ICTM1610S01H1-lp1860587-2020-02-03-ntoqvna.tar.xz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860587

Title:
  multipath -ll doesn't discover down all paths on Emulex hosts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860587] Re: multipath -ll doesn't discover down all paths on Emulex hosts

2020-01-31 Thread Jennifer Duong
Mauricio, how should I go about this if all of my servers boot from
SAN?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860587

Title:
  multipath -ll doesn't discover down all paths on Emulex hosts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860587] Re: multipath -ll doesn't discover down all paths on Emulex hosts

2020-01-30 Thread Jennifer Duong
Mauricio,

I should be seeing 12 paths for each LUN on my Emulex fabric-attached
host and 4 paths for my Emulex direct-connect host. It looks like my
QLogic fabric-attached host is now encountering this path-listing
issue too (a quick way to compare the expected and mapped path counts
is sketched after the output below).

root@ICTM1610S01H2:~# multipath -ll
3600a098000a0a2bc75865ddfa06a dm-8 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:141 sdm  8:192   active ready running
3600a098000a0a2bc758b5ddfa08e dm-5 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:147 sds  65:32   active ready running
3600a098000a0a28a9d875ddf9f59 dm-14 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=10 status=active
  `- 11:0:0:136 sdh  8:112   active ready running
3600a098000a0a28a9d8f5ddf9f88 dm-19 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=10 status=active
  `- 11:0:0:144 sdp  8:240   active ready running
3600a098000a0a2bc757c5ddfa02f dm-20 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:131 sdc  8:32    active ready running
3600a098000a0a2bc758c5ddfa09a dm-13 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:149 sdu  65:64   active ready running
3600a098000a0a28a9d915ddf9fa1 dm-21 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=10 status=active
  `- 11:0:0:148 sdt  65:48   active ready running
3600a098000a0a2bc75915ddfa0b6 dm-16 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:153 sdy  65:128  active ready running
3600a098000a0a28a9d855ddf9f4e dm-1 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=10 status=active
  `- 11:0:0:134 sdf  8:80    active ready running
3600a098000a0a2bc75845ddfa05e dm-10 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:139 sdk  8:160   active ready running
3600a098000a0a28a9d895ddf9f66 dm-23 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:1:138 sdam 66:96   active ready running
| |- 12:0:1:138 sdeb 128:48  active ready running
| |- 13:0:1:138 sdey 129:160 active ready running
| `- 14:0:0:138 sdgb 131:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:0:138 sdj  8:144   active ready running
  |- 12:0:0:138 sdcj 69:112  active ready running
  |- 13:0:0:138 sdbq 68:64   active ready running
  `- 14:0:1:138 sdhe 133:64  active ready running
3600a098000a0a2bc75825ddfa053 dm-7 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:137 sdi  8:128   active ready running
3600a098000a0a28a9d8d5ddf9f7c dm-11 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:1:142 sdaq 66:160  active ready running
3600a098000a0a2bc75955ddfa0c4 dm-26 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 11:0:0:155 sdaa 65:160  active ready running
| |- 12:0:0:155 sddk 71:32   active ready running
| |- 13:0:0:155 sdcw 70:64   active ready running
| `- 14:0:1:155 sdhv 134:80  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 11:0:1:155 sdbd 67:112  active ready running
  |- 12:0:1:155 sdes 129:64  active ready running
  |- 13:0:1:155 sdfp 130:176 active ready running
  `- 14:0:0:155 sdgs 132:128 active ready running
3600a098000a0a2bc758d5ddfa0a6 dm-18 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:151 sdw  65:96   active ready running
3600a098000a0a2bc758a5ddfa082 dm-2 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 11:0:0:145 sdq  65:0    active ready running
3600a098000a0a2bc75885ddfa076 dm-9 NETAPP
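
A quick way to compare the paths the SCSI midlayer sees against what
ended up in a map, using LUN 141 from the listing above as an example
(a sketch; the WWID is taken from that listing):

lsscsi | grep -c ':141]'                                                      # paths visible to the SCSI layer
sudo multipath -ll 3600a098000a0a2bc75865ddfa06a | grep -c 'ready running'    # paths in the multipath map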

[Bug 1860587] Re: multipath -ll doesn't discover down all paths on Emulex hosts

2020-01-30 Thread Jennifer Duong
** Attachment added: "1-30-20.zip"
   
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+attachment/5324288/+files/1-30-20.zip

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860587

Title:
  multipath -ll doesn't discover down all paths on Emulex hosts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

2020-01-27 Thread Jennifer Duong
** Summary changed:

- QLogic Direct-Connect host can't see SCSI-FC devices
+ QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

** Description changed:

  My QLogic direct-connect host can't seem to SANboot or see any SCSI-FC
- devices in general. I'm running with Ubuntu 20.04
- kernel-5.4.0-9-generic.
+ devices in general. I'm also not able to discover any NVMe devices. I'm
+ running with Ubuntu 20.04 kernel-5.4.0-9-generic.
  
  These are the HBAs I'm running with:
  
  root@ICTM1610S01H4:~# cat /sys/class/fc_host/host*/symbolic_name
  QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
  QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
  QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k
  QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k
  
  lsscsi and multipath -ll don't seem to see my SCSI devices:
  
  root@ICTM1610S01H4:/opt/iop/linux/scratch# multipath -ll
  root@ICTM1610S01H4:/opt/iop/linux/scratch# lsscsi
  [0:0:0:0]    cd/dvd  KVM  vmDisk-CD 0.01  /dev/sr0
  [1:0:0:0]    cd/dvd  HL-DT-ST DVDRAM GUD0N PF02  /dev/sr1
  [3:0:0:0]    disk    ATA  ST1000NX0313 SNA3  /dev/sda
  
  It doesn't appear to be a configuration/hardware issue as installing
  Ubuntu 18.04 on the same exact server is able to SANboot and see my SCSI
  devices.
  
  root@ICTM1610S01H4:/opt/iop/linux/scratch# lsb_release -rd
  Description:Ubuntu Focal Fossa (development branch)
  Release:20.04
  root@ICTM1610S01H4:/opt/iop/linux/scratch# apt-cache policy 
linux-image-generic
  linux-image-generic:
-   Installed: 5.4.0.9.11
-   Candidate: 5.4.0.9.11
-   Version table:
-  *** 5.4.0.9.11 500
- 500 http://repomirror-ict.eng.netapp.com/ubuntu focal/main amd64 
Packages
- 100 /var/lib/dpkg/status
+   Installed: 5.4.0.9.11
+   Candidate: 5.4.0.9.11
+   Version table:
+  *** 5.4.0.9.11 500
+ 500 http://repomirror-ict.eng.netapp.com/ubuntu focal/main amd64 
Packages
+ 100 /var/lib/dpkg/status
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-generic 5.4.0.9.11
  ProcVersionSignature: Ubuntu 5.4.0-9.12-generic 5.4.3
  Uname: Linux 5.4.0-9-generic x86_64
  AlsaDevices:
-  total 0
-  crw-rw 1 root audio 116,  1 Jan 23 11:32 seq
-  crw-rw 1 root audio 116, 33 Jan 23 11:32 timer
+  total 0
+  crw-rw 1 root audio 116,  1 Jan 23 11:32 seq
+  crw-rw 1 root audio 116, 33 Jan 23 11:32 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay': 'aplay'
  ApportVersion: 2.20.11-0ubuntu15
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord': 
'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  Date: Thu Jan 23 15:04:42 2020
  InstallationDate: Installed on 2020-01-23 (0 days ago)
  InstallationMedia: Ubuntu-Server 20.04 LTS "Focal Fossa" - Alpha amd64 
(20200107)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig': 'iwconfig'
  MachineType: FUJITSU PRIMERGY RX2540 M4
  PciMultimedia:
-  
+ 
  ProcEnviron:
-  TERM=xterm
-  PATH=(custom, no user)
-  XDG_RUNTIME_DIR=
-  LANG=en_US.UTF-8
-  SHELL=/bin/bash
+  TERM=xterm
+  PATH=(custom, no user)
+  XDG_RUNTIME_DIR=
+  LANG=en_US.UTF-8
+  SHELL=/bin/bash
  ProcFB: 0 mgag200drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.4.0-9-generic 
root=UUID=9b9e6b1a-b8d9-4d9c-8782-36729d7f88a4 ro console=tty0 
console=ttyS0,115200n8
  RelatedPackageVersions:
-  linux-restricted-modules-5.4.0-9-generic N/A
-  linux-backports-modules-5.4.0-9-generic  N/A
-  linux-firmware   1.184
+  linux-restricted-modules-5.4.0-9-generic N/A
+  linux-backports-modules-5.4.0-9-generic  N/A
+  linux-firmware   1.184
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill': 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 06/25/2019
  dmi.bios.vendor: FUJITSU // American Megatrends Inc.
  dmi.bios.version: V5.0.0.12 R1.35.0 for D3384-A1x
  dmi.board.name: D3384-A1
  dmi.board.vendor: FUJITSU
  dmi.board.version: S26361-D3384-A13 WGS04 GS04
  dmi.chassis.asset.tag: System Asset Tag
  dmi.chassis.type: 23
  dmi.chassis.vendor: FUJITSU
  dmi.chassis.version: RX2540M4R4
  dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.12R1.35.0forD3384-A1x:bd06/25/2019:svnFUJITSU:pnPRIMERGYRX2540M4:pvr:rvnFUJITSU:rnD3384-A1:rvrS26361-D3384-A13WGS04GS04:cvnFUJITSU:ct23:cvrRX2540M4R4:
  dmi.product.family: SERVER
  dmi.product.name: PRIMERGY RX2540 M4
  dmi.product.sku: S26361-K1567-Vxxx
  dmi.sys.vendor: FUJITSU

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't discover SCSI-FC or NVMe/FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu

[Bug 1860587] Re: multipath -ll doesn't discover down all paths on Emulex hosts

2020-01-27 Thread Jennifer Duong
** Attachment added: "multipath-d-v3.log"
   
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+attachment/5323319/+files/multipath-d-v3.log

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860587

Title:
  multipath -ll doesn't discover down all paths on Emulex hosts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860587] Re: multipath -ll doesn't discover down all paths on Emulex hosts

2020-01-24 Thread Jennifer Duong
Paride, we do support Ubuntu 18.04, but that was with older QLogic and
Emulex HBAs. Given that those solutions are supported, I would assume
multipath did not behave this way on them.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860587

Title:
  multipath -ll doesn't discover down all paths on Emulex hosts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] Re: QLogic Direct-Connect host can't see SCSI-FC devices

2020-01-23 Thread Jennifer Duong
** Attachment added: "lspci-vnvn.log"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+attachment/5322515/+files/lspci-vnvn.log

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't see SCSI-FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860724] [NEW] QLogic Direct-Connect host can't see SCSI-FC devices

2020-01-23 Thread Jennifer Duong
Public bug reported:

My QLogic direct-connect host can't seem to SANboot or see any SCSI-FC
devices in general. I'm running with Ubuntu 20.04
kernel-5.4.0-9-generic.

These are the HBAs I'm running with:

root@ICTM1610S01H4:~# cat /sys/class/fc_host/host*/symbolic_name
QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k

lsscsi and multipath -ll don't seem to see my SCSI devices:

root@ICTM1610S01H4:/opt/iop/linux/scratch# multipath -ll
root@ICTM1610S01H4:/opt/iop/linux/scratch# lsscsi
[0:0:0:0]    cd/dvd  KVM  vmDisk-CD 0.01  /dev/sr0
[1:0:0:0]    cd/dvd  HL-DT-ST DVDRAM GUD0N PF02  /dev/sr1
[3:0:0:0]    disk    ATA  ST1000NX0313 SNA3  /dev/sda

It doesn't appear to be a configuration/hardware issue as installing
Ubuntu 18.04 on the same exact server is able to SANboot and see my SCSI
devices.
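
A few low-level checks that may help narrow down whether the QLogic
ports ever come up on 20.04 (a sketch, assuming the four fc_host
entries above are the QLogic ports):

cat /sys/class/fc_host/host*/port_state                                           # expect 'Online' on working links
for h in /sys/class/fc_host/host*/issue_lip; do echo 1 | sudo tee "$h"; done      # re-run loop initialization
for s in /sys/class/scsi_host/host*/scan; do echo '- - -' | sudo tee "$s"; done   # rescan for LUNs
lsscsi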

root@ICTM1610S01H4:/opt/iop/linux/scratch# lsb_release -rd
Description:Ubuntu Focal Fossa (development branch)
Release:20.04
root@ICTM1610S01H4:/opt/iop/linux/scratch# apt-cache policy linux-image-generic
linux-image-generic:
  Installed: 5.4.0.9.11
  Candidate: 5.4.0.9.11
  Version table:
 *** 5.4.0.9.11 500
500 http://repomirror-ict.eng.netapp.com/ubuntu focal/main amd64 
Packages
100 /var/lib/dpkg/status

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: linux-image-generic 5.4.0.9.11
ProcVersionSignature: Ubuntu 5.4.0-9.12-generic 5.4.3
Uname: Linux 5.4.0-9-generic x86_64
AlsaDevices:
 total 0
 crw-rw 1 root audio 116,  1 Jan 23 11:32 seq
 crw-rw 1 root audio 116, 33 Jan 23 11:32 timer
AplayDevices: Error: [Errno 2] No such file or directory: 'aplay': 'aplay'
ApportVersion: 2.20.11-0ubuntu15
Architecture: amd64
ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord': 'arecord'
AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
Date: Thu Jan 23 15:04:42 2020
InstallationDate: Installed on 2020-01-23 (0 days ago)
InstallationMedia: Ubuntu-Server 20.04 LTS "Focal Fossa" - Alpha amd64 
(20200107)
IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig': 'iwconfig'
MachineType: FUJITSU PRIMERGY RX2540 M4
PciMultimedia:
 
ProcEnviron:
 TERM=xterm
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcFB: 0 mgag200drmfb
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.4.0-9-generic 
root=UUID=9b9e6b1a-b8d9-4d9c-8782-36729d7f88a4 ro console=tty0 
console=ttyS0,115200n8
RelatedPackageVersions:
 linux-restricted-modules-5.4.0-9-generic N/A
 linux-backports-modules-5.4.0-9-generic  N/A
 linux-firmware   1.184
RfKill: Error: [Errno 2] No such file or directory: 'rfkill': 'rfkill'
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 06/25/2019
dmi.bios.vendor: FUJITSU // American Megatrends Inc.
dmi.bios.version: V5.0.0.12 R1.35.0 for D3384-A1x
dmi.board.name: D3384-A1
dmi.board.vendor: FUJITSU
dmi.board.version: S26361-D3384-A13 WGS04 GS04
dmi.chassis.asset.tag: System Asset Tag
dmi.chassis.type: 23
dmi.chassis.vendor: FUJITSU
dmi.chassis.version: RX2540M4R4
dmi.modalias: 
dmi:bvnFUJITSU//AmericanMegatrendsInc.:bvrV5.0.0.12R1.35.0forD3384-A1x:bd06/25/2019:svnFUJITSU:pnPRIMERGYRX2540M4:pvr:rvnFUJITSU:rnD3384-A1:rvrS26361-D3384-A13WGS04GS04:cvnFUJITSU:ct23:cvrRX2540M4R4:
dmi.product.family: SERVER
dmi.product.name: PRIMERGY RX2540 M4
dmi.product.sku: S26361-K1567-Vxxx
dmi.sys.vendor: FUJITSU

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860724

Title:
  QLogic Direct-Connect host can't see SCSI-FC devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1860724/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860587] Re: multipath -ll doesn't discover down all paths on Emulex hosts

2020-01-22 Thread Jennifer Duong
apport.multipath-tools.z4yugdhx.apport

** Attachment added: "apport.multipath-tools.z4yugdhx.apport"
   
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+attachment/5322201/+files/apport.multipath-tools.z4yugdhx.apport

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860587

Title:
  multipath -ll doesn't discover down all paths on Emulex hosts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1860587/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860587] [NEW] multipath -ll doesn't discover down all paths on Emulex hosts

2020-01-22 Thread Jennifer Duong
Public bug reported:

root@ICTM1610S01H1:/opt/iop/usr/jduong# apt-cache show multipath-tools
Package: multipath-tools
Architecture: amd64
Version: 0.7.9-3ubuntu7
Priority: extra
Section: admin
Origin: Ubuntu
Maintainer: Ubuntu Developers 
Original-Maintainer: Debian DM Multipath Team 

Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 1141
Depends: libaio1 (>= 0.3.106-8), libc6 (>= 2.29), libdevmapper1.02.1 (>= 
2:1.02.97), libjson-c4 (>= 0.13.1), libreadline8 (>= 6.0), libsystemd0, 
libudev1 (>= 183), liburcu6 (>= 0.11.1), udev (>> 136-1), kpartx (>= 
0.7.9-3ubuntu7), lsb-base (>= 3.0-6), sg3-utils-udev
Suggests: multipath-tools-boot
Breaks: multipath-tools-boot (<= 0.4.8+git0.761c66f-2~), 
multipath-tools-initramfs (<= 1.0.1)
Filename: pool/main/m/multipath-tools/multipath-tools_0.7.9-3ubuntu7_amd64.deb
Size: 276004
MD5sum: ddf4c86498c054621e6aff07d9e71f84
SHA1: e6baf43104651d7346389e4edfa9363902a0ed62
SHA256: 858b9dd5c4597a20d9f44eb03f2b22a3835d14ced541bde59b74bbbcc568d7f9
Homepage: http://christophe.varoqui.free.fr/
Description-en: maintain multipath block device access
 These tools are in charge of maintaining the disk multipath device maps and
 react to path and map events.
 .
 If you install this package you may have to change the way you address block
 devices. See README.Debian for details.
Description-md5: d2b50f6d45021a3e6697180f992bb365
Task: server, cloud-image
Supported: 9m

root@ICTM1610S01H1:/opt/iop/usr/jduong# lsb_release -rd
Description:Ubuntu Focal Fossa (development branch)
Release:20.04

root@ICTM1610S01H1:/opt/iop/usr/jduong# lsb_release -rd
Description:Ubuntu Focal Fossa (development branch)
Release:20.04
root@ICTM1610S01H1:/opt/iop/usr/jduong# apt-cache policy multipath-tools
multipath-tools:
  Installed: 0.7.9-3ubuntu7
  Candidate: 0.7.9-3ubuntu7
  Version table:
 *** 0.7.9-3ubuntu7 500
500 http://repomirror-ict.eng.netapp.com/ubuntu focal/main amd64 
Packages
100 /var/lib/dpkg/status

Both hosts have the following Emulex HBAs:

root@ICTM1610S01H1:/opt/iop/usr/jduong# cat 
/sys/class/fc_host/host*/symbolic_name
Emulex LPe16002B-M6 FV12.4.243.11 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe16002B-M6 FV12.4.243.11 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe32002-M2 FV12.4.243.17 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe32002-M2 FV12.4.243.17 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe35002-M2 FV12.4.243.23 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux
Emulex LPe35002-M2 FV12.4.243.23 DV12.4.0.0. HN:ICTM1610S01H1. OS:Linux

This is what I’m seeing when I run multipath -ll on my Emulex fabric-
attached host:

root@ICTM1610S01H1:/opt/iop/usr/jduong# multipath -ll
3600a098000a0a2bc75685ddf9fc5 dm-13 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:141 sdm  8:192   active ready running
3600a098000a0a2bc756f5ddf9ffb dm-24 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:151 sdw  65:96   active ready running
3600a098000a0a2bc756a5ddf9fd0 dm-11 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:143 sdo  8:224   active ready running
3600a098000a0a28a9d785ddf9f0f dm-20 NETAPP,INF-01-00
size=18G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:152 sdx  65:112  active ready running
3600a098000a0a28a9d675ddf9eac dm-28 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:134 sdf  8:80    active ready running
3600a098000a0a28a9d655ddf9ea1 dm-7 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:132 sdd  8:48    active ready running
3600a098000a0a2bc756c5ddf9fd9 dm-12 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:145 sdq  65:0    active ready running
3600a098000a0a2bc75665ddf9fbb dm-1 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:139 sdk  8:160   active ready running
3600a098000a0a2bc75645ddf9fb0 dm-3 NETAPP,INF-01-00
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' 
wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 12:0:0:137 sdi  8:128   active ready running

If I grep the LUN with lsscsi, it shows all 12 paths:

root@ICTM1610S01H1:/opt/iop/usr/jduong# lsscsi | grep 141
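
When the SCSI layer shows all 12 paths but the map only holds one, it
can also be worth asking multipathd what it is tracking and reloading
the maps from the currently visible paths (a sketch; multipath -r
reloads the existing maps in place):

sudo multipathd -k'show paths' | grep ':141 '   # paths multipathd itself knows about for LUN 141
sudo multipath -r                               # reload/rebuild the maps
sudo multipath -ll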