Hi again,

I'm trying to mount my new Ceph volume on a remote PC using cephfs. I've followed the quick start guide, but when I try to mount the filesystem, I get this:

remote$ mount -t ceph 192.168.0.6:6789:/ /mnt/ceph/
mount: 192.168.0.6:6789:/: can't read superblock

remote$ dmesg | tail
[951382.981690] libceph: client4105 fsid aa447ff8-8270-491b-b59e-2735e852eaf5
[951382.983486] libceph: mon0 192.168.0.6:6789 session established

I'm not sure what the problem is. The osd, mon and mds daemons are all running on the Ceph host, and no traffic is firewalled between the two machines. I have also disabled authentication in ceph.conf.
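In case the syntax matters, this is essentially the stanza I used to disable it (the rest of the file is the stock single-node layout from the docs, so only the auth line should differ from the defaults):

[global]
        ; no cephx, so the client shouldn't need a keyring
        auth supported = none

The only issue I can see is this: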

cephhost$ ceph health
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean

I'm not sure what that means: all I've done is run mkcephfs and then start the ceph service (exact steps below). The cluster hasn't seen any client I/O yet, so I don't know where the unclean pgs are coming from. The underlying filesystem is ext4, in case that matters.
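For completeness, the setup was just the standard sequence from the docs, run on the Ceph host:

cephhost$ mkcephfs -a -c /etc/ceph/ceph.conf
cephhost$ service ceph -a start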

I'm a bit stuck as to what to try next. I can't see why the filesystem won't mount remotely, and the troubleshooting docs say unclean pgs are related to outages, which doesn't seem possible on a newly created cluster with only one osd/mon/mds instance.
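Happy to post more output if it would help; I'm guessing the relevant diagnostics are something like:

cephhost$ ceph -s
cephhost$ ceph osd dump
cephhost$ ceph pg dump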

Any pointers would be much appreciated!

Many thanks,
Adam.
