Re: NFSv4: Invalid fstype: Invalid argument

2022-07-03 Thread FreeBSD User
On Sun, 3 Jul 2022 11:57:33 +0200,
FreeBSD User  wrote:

Sorry for the noise; the learning process is still in progress, and I have now
learned about the previously-not-fully-understood V4: entry in /etc/exports.
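
For anyone else hitting this: an NFSv4 server needs a V4: line in /etc/exports
that defines the NFSv4 root, in addition to the normal export lines. A minimal
sketch of what I mean, reusing the path and network from the original mail
below (the -sec, -maproot and network options here are just assumptions, adjust
as needed):

V4: / -sec=sys -network 192.168.0.0 -mask 255.255.255.0
/mnt/NAS00/home -maproot=root -network 192.168.0.0 -mask 255.255.255.0

On a plain FreeBSD server the usual rc.conf knobs (nfs_server_enable,
nfsv4_server_enable, mountd_enable) need to be set as well; XigmaNAS drives
these through its web interface.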

Thanks for reading, and for your patience.

regards,

oh

> Hello folks,
> 
> 
> Trying to mount an NFS filesystem offered by a FreeBSD 12.3-p5 (XigmaNAS) server
> from a recent CURRENT client (FreeBSD 14.0-CURRENT #19 main-n256512-ef86876b846:
> Sat Jul  2 23:31:53 CEST 2022 amd64) via
> 
> :/etc # mount -t nfs -o vers=4 192.168.0.11:/mnt/NAS00/home /tmp/mnt
> 
> results in
> 
> mount_nfs: nmount: /tmp/mnt, Invalid fstype: Invalid argument
> 
> and checking whether I can mount NFSv3 (I have explicitly set NFSv4-only on
> the server side, see below) via
> 
> :/etc # mount -t nfs -o vers=3,mntudp 192.168.0.11:/mnt/NAS00/home /tmp/mnt
> [udp] 192.168.0.11:/mnt/NAS00/home: RPCPROG_NFS: RPC: Program not registered
> or
> :/etc # mount -t nfs -o vers=3,mntudp [fd01:a37::11]:/mnt/NAS00/home /tmp/mnt
> [udp6] fd01:a37::11:/mnt/NAS00/home: RPCPROG_NFS: RPC: Program not registered
> 
> Wondering about the TCP connection attempts, I checked the configurations on
> both the CURRENT (client) and XigmaNAS (server) sides.
> 
> [... server side ...]
> nas01: ~# ps -waux|grep mountd
> root   4332   0.0  0.0  11684  2652  -  Is   23:13  0:00.01 
> /usr/sbin/mountd -l -r -S
> -R /etc/exports /etc/zfs/exports
> 
> rpcinfo -p
>    program vers proto   port  service
>  100000    4   tcp    111  rpcbind
>  100000    3   tcp    111  rpcbind
>  100000    2   tcp    111  rpcbind
>  100000    4   udp    111  rpcbind
>  100000    3   udp    111  rpcbind
>  100000    2   udp    111  rpcbind
>  100000    4 local    111  rpcbind
>  100000    3 local    111  rpcbind
>  100000    2 local    111  rpcbind
>  100024    1   udp    671  status
>  100024    1   tcp    671  status
>  100021    0   udp   1003  nlockmgr
>  100021    0   tcp    603  nlockmgr
>  100021    1   udp   1003  nlockmgr
>  100021    1   tcp    603  nlockmgr
>  100021    3   udp   1003  nlockmgr
>  100021    3   tcp    603  nlockmgr
>  100021    4   udp   1003  nlockmgr
>  100021    4   tcp    603  nlockmgr
> 
> I see neither mountd nor nfsd registered; I hope this is all right within an
> NFSv4-only environment.
> 
> Well, on the CURRENT server that flawlessly provides NFSv4 to the same CURRENT
> client I am trying to access the XigmaNAS NFSv4 fs from, rpcinfo looks like:
> 
> (current server):
> root@walhall:/usr/src # rpcinfo -p
>    program vers proto   port  service
>  100000    4   tcp    111  rpcbind
>  100000    3   tcp    111  rpcbind
>  100000    2   tcp    111  rpcbind
>  100000    4   udp    111  rpcbind
>  100000    3   udp    111  rpcbind
>  100000    2   udp    111  rpcbind
>  100000    4 local    111  rpcbind
>  100000    3 local    111  rpcbind
>  100000    2 local    111  rpcbind
>  100024    1   udp    774  status
>  100024    1   tcp    774  status
>  100021    0   udp    746  nlockmgr
>  100021    0   tcp    661  nlockmgr
>  100021    1   udp    746  nlockmgr
>  100021    1   tcp    661  nlockmgr
>  100021    3   udp    746  nlockmgr
>  100021    3   tcp    661  nlockmgr
>  100021    4   udp    746  nlockmgr
>  100021    4   tcp    661  nlockmgr
> 
> Well, I also checked from the client to the CURRENT server and from the client
> to the XigmaNAS server via rpcinfo -p, and I always get the very same result.
> 
> Checking the accessibility of the server host on the network via nmap gives this
> result (please be aware that we use a dual-stack network and need both IPv6 and
> IPv4 access; this attempt shows IPv4 access, but IPv6 access is also available
> and verified):
> 
> UDP:
> :/etc # nmap -sU 192.168.0.11
> Starting Nmap 7.91 ( https://nmap.org ) at 2022-07-03 11:05 CEST
> Nmap scan report for nas01.intern (192.168.0.11)
> Host is up (0.00094s latency).
> Not shown: 996 closed ports
> PORT STATE SERVICE
> 111/udp  open  rpcbind
> 514/udp  open|filtered syslog
> 2049/udp open  nfs
> 5353/udp open  zeroconf
> 
> and TCP (since NFSv4 runs over TCP on port 2049):
> :/etc # nmap -sS 192.168.0.11
> Starting Nmap 7.91 ( https://nmap.org ) at 2022-07-03 11:34 CEST
> Nmap scan report for nas01.intern (192.168.0.11)
> Host is up (0.00074s latency).
> Not shown: 996 closed ports
> PORT STATE SERVICE
> 22/tcp   open  ssh
> 111/tcp  open  rpcbind
> 443/tcp  open  https
> 2049/tcp open  nfs
> 
> I'm out of ideas here. What does 
> 
> mount_nfs: nmount: /tmp/mnt, Invalid fstype: Invalid argument
> 
> mean? Is it the server reporting that it doesn't serve the requested fstype, or
> is there an issue with the local filesystem/mountpoint (located on UFS/FFS; the
> backend NFS filesystems are all located on ZFS)?
> 
> I'm drifting like a dead man in the water here, and I did not find any answers
> on the net to the error reported here that were applicable to the problem seen.

NFSv4: Invalid fstype: Invalid argument

2022-07-03 Thread FreeBSD User
Hello folks,


Trying to mount an NFS filesystem offered by a FreeBSD 12.3-p5 (XigmaNAS) server
from a recent CURRENT client (FreeBSD 14.0-CURRENT #19 main-n256512-ef86876b846:
Sat Jul  2 23:31:53 CEST 2022 amd64) via

:/etc # mount -t nfs -o vers=4 192.168.0.11:/mnt/NAS00/home /tmp/mnt

results in

mount_nfs: nmount: /tmp/mnt, Invalid fstype: Invalid argument

and checking whether I can mount NFSv3 (I have explicitly set NFSv4-only on
the server side, see below) via

:/etc # mount -t nfs -o vers=3,mntudp 192.168.0.11:/mnt/NAS00/home /tmp/mnt
[udp] 192.168.0.11:/mnt/NAS00/home: RPCPROG_NFS: RPC: Program not registered
or
:/etc # mount -t nfs -o vers=3,mntudp [fd01:a37::11]:/mnt/NAS00/home /tmp/mnt
[udp6] fd01:a37::11:/mnt/NAS00/home: RPCPROG_NFS: RPC: Program not registered

Wondering about the TCP connection attempts, I checked the configurations on
both the CURRENT (client) and XigmaNAS (server) sides.

[... server side ...]
nas01: ~# ps -waux|grep mountd
root   4332   0.0  0.0  11684  2652  -  Is   23:13  0:00.01 
/usr/sbin/mountd -l -r -S
-R /etc/exports /etc/zfs/exports

rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  rpcbind
    100000    3   tcp    111  rpcbind
    100000    2   tcp    111  rpcbind
    100000    4   udp    111  rpcbind
    100000    3   udp    111  rpcbind
    100000    2   udp    111  rpcbind
    100000    4 local    111  rpcbind
    100000    3 local    111  rpcbind
    100000    2 local    111  rpcbind
    100024    1   udp    671  status
    100024    1   tcp    671  status
    100021    0   udp   1003  nlockmgr
    100021    0   tcp    603  nlockmgr
    100021    1   udp   1003  nlockmgr
    100021    1   tcp    603  nlockmgr
    100021    3   udp   1003  nlockmgr
    100021    3   tcp    603  nlockmgr
    100021    4   udp   1003  nlockmgr
    100021    4   tcp    603  nlockmgr

I see neither mountd nor nfsd registered; I hope this is all right within an
NFSv4-only environment.
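
(For what it's worth: as far as I understand it, NFSv4 has no separate Mount
protocol and does not need rpcbind at all, which is presumably why mountd is
running with -R above. An rpcbind-independent way to confirm that the server at
least answers on the NFSv4 port is a plain TCP probe, e.g. with the address from
above:

:/etc # nc -vz 192.168.0.11 2049

A zero exit status only means port 2049 is open; it says nothing about whether
the exports themselves are set up correctly.)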

Well, on the CURRENT server that flawlessly provides NFSv4 to the same CURRENT
client I am trying to access the XigmaNAS NFSv4 fs from, rpcinfo looks like:

(current server):
root@walhall:/usr/src # rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  rpcbind
    100000    3   tcp    111  rpcbind
    100000    2   tcp    111  rpcbind
    100000    4   udp    111  rpcbind
    100000    3   udp    111  rpcbind
    100000    2   udp    111  rpcbind
    100000    4 local    111  rpcbind
    100000    3 local    111  rpcbind
    100000    2 local    111  rpcbind
    100024    1   udp    774  status
    100024    1   tcp    774  status
    100021    0   udp    746  nlockmgr
    100021    0   tcp    661  nlockmgr
    100021    1   udp    746  nlockmgr
    100021    1   tcp    661  nlockmgr
    100021    3   udp    746  nlockmgr
    100021    3   tcp    661  nlockmgr
    100021    4   udp    746  nlockmgr
    100021    4   tcp    661  nlockmgr

Well, I also checked from the client to the CURRENT server and from the client
to the XigmaNAS server via rpcinfo -p, and I always get the very same result.

Checking the accessibility of the server host on the network via nmap gives this
result (please be aware that we use a dual-stack network and need both IPv6 and
IPv4 access; this attempt shows IPv4 access, but IPv6 access is also available
and verified):

UDP:
:/etc # nmap -sU 192.168.0.11
Starting Nmap 7.91 ( https://nmap.org ) at 2022-07-03 11:05 CEST
Nmap scan report for nas01.intern (192.168.0.11)
Host is up (0.00094s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
111/udp  open  rpcbind
514/udp  open|filtered syslog
2049/udp open  nfs
5353/udp open  zeroconf

and TCP (since NFSv4 runs over TCP on port 2049):
:/etc # nmap -sS 192.168.0.11
Starting Nmap 7.91 ( https://nmap.org ) at 2022-07-03 11:34 CEST
Nmap scan report for nas01.intern (192.168.0.11)
Host is up (0.00074s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp   open  ssh
111/tcp  open  rpcbind
443/tcp  open  https
2049/tcp open  nfs

I'm out of ideas here. What does 

mount_nfs: nmount: /tmp/mnt, Invalid fstype: Invalid argument

mean? Is it the server reporting that it doesn't serve the requested fstype, or
is there an issue with the local filesystem/mountpoint (located on UFS/FFS; the
backend NFS filesystems are all located on ZFS)?

I'm drifting like a dead man in the water here, and I did not find any answers
on the net to the error reported here that were applicable to the problem seen.

Some hints are highly appreciated.

Thanks in advance and kind regards,

oh





-- 
O. Hartmann



emulated nvme on vmware workstation whines

2022-07-03 Thread Yuri
I started seeing the following whines from the emulated nvme device on VMware
Workstation some time ago, and it seems to happen almost exclusively during the
nightly periodic run:

Jul  3 03:01:47 titan kernel: nvme0: RECOVERY_START 48053105730250 vs
48051555254036
Jul  3 03:01:47 titan kernel: nvme0: timeout with nothing complete,
resetting
Jul  3 03:01:47 titan kernel: nvme0: Resetting controller due to a timeout.
Jul  3 03:01:47 titan kernel: nvme0: RECOVERY_WAITING
Jul  3 03:01:47 titan kernel: nvme0: resetting controller
Jul  3 03:01:47 titan kernel: nvme0: temperature threshold not supported
Jul  3 03:01:47 titan kernel: nvme0: resubmitting queued i/o
Jul  3 03:01:47 titan kernel: nvme0: READ sqid:10 cid:0 nsid:1
lba:8931648 len:8
Jul  3 03:01:47 titan kernel: nvme0: aborting outstanding i/o
Jul  3 03:02:23 titan kernel: nvme0: RECOVERY_START 48207341204759 vs
48207032791553
Jul  3 03:02:23 titan kernel: nvme0: RECOVERY_START 48207722834338 vs
48207032791553
Jul  3 03:02:23 titan kernel: nvme0: timeout with nothing complete,
resetting
Jul  3 03:02:23 titan kernel: nvme0: Resetting controller due to a timeout.
Jul  3 03:02:23 titan kernel: nvme0: RECOVERY_WAITING
Jul  3 03:02:23 titan kernel: nvme0: resetting controller
Jul  3 03:02:23 titan kernel: nvme0: temperature threshold not supported
Jul  3 03:02:23 titan kernel: nvme0: aborting outstanding i/o
Jul  3 03:02:23 titan kernel: nvme0: resubmitting queued i/o
Jul  3 03:02:23 titan kernel: nvme0: WRITE sqid:1 cid:0 nsid:1
lba:27806528 len:8
Jul  3 03:02:23 titan kernel: nvme0: aborting outstanding i/o

It does not seem to break anything, i.e. there are no errors from periodic, and
neither zpool status nor zpool scrub shows any issues, so I'm just wondering if
there is some obvious reason for this.
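
(In case it is simply the emulated controller falling behind during the
periodic I/O burst, one knob that might be worth looking at, assuming the
per-controller sysctl is present on this kernel build (I would verify first
with sysctl -a | grep nvme), is the I/O timeout:

sysctl dev.nvme.0.timeout_period

Raising it would only paper over the latency, of course, but it would at least
show whether the resets are purely timeout-driven. Watching gstat in the guest
during the 03:01 run should also show whether the timeouts line up with the
periodic disk load.)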

I am staying current with, well, -CURRENT and with VMware Workstation versions
(now at 16.2.3), and I do not remember when exactly this started. There are no
other VMs doing anything at that time, and the host system is pretty idle as
well.