Hello,
I am unable to mount the CephFS file system from a client node; the mount fails with
"mount error 5 = Input/output error".
The MDS was installed on a separate node. The Ceph cluster health is OK and the MDS
service is running. The firewall was disabled on all nodes in the cluster.
-- Ceph Cluster Nodes (RHEL 7.2 version + Jewel version 10.2.1)
-- Client Nodes - Ubuntu 14.04 LTS
Admin Node:
[root@Admin ceph]# ceph mds stat
e34: 0/0/1 up
Client Side:
user@clientA2:/etc/ceph$ ceph fs ls --name client.admin
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
user@clientA2:/etc/ceph$ sudo mount -t ceph 10.10.100.5:6789:/user /home/user/cephfs \
    -o name=admin,secret=AQAQK1NXgupKIRAA9O7fKxadI/iIq/vPKLI9rw==
mount error 5 = Input/output error
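For reference, the same mount expressed as an /etc/fstab entry, using a secret file instead of an inline key so the key does not end up in the shell history (the secretfile path is hypothetical; the key would sit alone on the first line of that file):

```
# /etc/fstab — kernel CephFS mount (sketch; secretfile path is an assumption)
10.10.100.5:6789:/user  /home/user/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0
```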
The connection to the monitor node was established successfully:
$ tail -f /var/log/syslog
Jun 14 16:32:24 clientA2 kernel: [82270.155030] libceph: client134154 fsid 66c5f31c-1756-47ce-889d-960e0d99f37a
Jun 14 16:32:24 clientA2 kernel: [82270.156726] libceph: mon0 10.10.100.5:6789 session established
I am able to check the Ceph health status from the client node with the client.admin keyring:
user@clientA2:/etc/ceph$ ceph -s --name client.admin
cluster 66c5f31c-1756-47ce-889d-960e0d99f37a
health HEALTH_OK
monmap e6: 3 mons at {siteAmon=10.10.100.5:6789/0,siteBmon=10.10.150.6:6789/0,siteCmon=10.10.200.7:6789/0}
election epoch 70, quorum 0,1,2 siteAmon,siteBmon,siteCmon
fsmap e34: 0/0/1 up
osdmap e1097: 19 osds: 19 up, 19 in
flags sortbitwise
pgmap v25719: 1286 pgs, 5 pools, 92160 kB data, 9 objects
3998 MB used, 4704 GB / 4708 GB avail
1286 active+clean
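One thing I noticed: both `ceph mds stat` and the fsmap line in `ceph -s` read `0/0/1 up`. If I am reading the Jewel output format correctly (up/in/max_mds — an assumption on my part), that would mean no MDS daemon is actually registered as up. Could that be the cause of the I/O error? A small shell sketch of how I am decoding that line:

```shell
# Decode the counters in an fsmap / "ceph mds stat" line such as "e34: 0/0/1 up".
# My reading of the fields (an assumption): up / in / max_mds.
line="e34: 0/0/1 up"
counters=$(printf '%s\n' "$line" | sed -n 's/^e[0-9]*: *\([0-9/]*\) up$/\1/p')
up=$(printf '%s\n' "$counters" | cut -d/ -f1)
inmap=$(printf '%s\n' "$counters" | cut -d/ -f2)
maxmds=$(printf '%s\n' "$counters" | cut -d/ -f3)
echo "mds up=$up in=$inmap max_mds=$maxmds"
# prints: mds up=0 in=0 max_mds=1
```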
Can anyone please help with a solution for the above issue?

Thanks,
Rakesh Parkiti
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com