Hi Sheng,

On Fri, 12 Mar 2010, sheng zheng wrote:

> Hi, all
>   When I install a test cluster with ceph-0.19.tar.gz, it runs abnormally.
>  1) When I run mkmonfs, the following appears:
>   mkmonfs -i 0 --mon-data mondata/mon0 --monmap monmap --osdmap osdmap
> 10.03.12 17:07:28.542772 store(mondata/mon0) mkfs
> 10.03.12 17:07:28.542932 store(mondata/mon0) test -d mondata/mon0 && /bin/rm
> -rf mondata/mon0 ; mkdir -p mondata/mon0
> 10.03.12 17:07:28.558756 mon0(starting).class v0 create_initial -- creating
> initial map
> 10.03.12 17:07:28.562110 mon0(starting).auth v0 create_initial -- creating
> initial map
> 10.03.12 17:07:28.562143 mon0(starting).auth v0 reading initial keyring
> can't open ~/.ceph/keyring.bin, /etc/ceph/keyring.bin, .ceph_keyring: No
> such file or directory
> can't open ~/.ceph/keyring.bin, /etc/ceph/keyring.bin, .ceph_keyring: No
> such file or directory
> can't open ~/.ceph/keyring.bin, /etc/ceph/keyring.bin, .ceph_keyring: No
> such file or directory
> mkmonfs: created monfs at mondata/mon0 for mon0

The keyring.bin warnings can be ignored if you're not using cephx 
authentication (i.e., if you don't have

        auth supported = cephx

in your ceph.conf).  If you _do_ have it enabled, you need to add '-k 
adminkeyring.bin' to the mkcephfs command line so that you get a copy of 
the admin key to administer the cluster.  But I'd recommend just leaving 
authentication off for now!
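
For example, something along these lines; the -c path and --allhosts flag 
here are just illustrative, so substitute whatever options you normally 
run mkcephfs with:

        mkcephfs -c /etc/ceph/ceph.conf --allhosts -k adminkeyring.bin

That leaves a copy of the admin key in adminkeyring.bin for administering 
the cluster afterwards.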

> 2) When I run 'mount -t ceph 192.168.1.202:/ /mnt/ceph', the following appears:
> mount: 192.168.1.202:/: can't read superblock
> dmesg:
>      [ 1581.804768] ceph: client4100 fsid
> 07bd01d9-93b0-8928-1d87-cb3bb0d76684
> [ 1581.804847] ceph: mon0 192.168.1.202:6789 session established

Probably the MDS isn't letting the client open the root inode.  That may 
be because the MDS didn't start, or because the OSDs aren't up (without 
which the MDS can't do anything useful).  The output from 'ceph -w' would 
narrow it down.
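
Roughly something like this (ceph -w watches the cluster; -s is just a 
one-shot summary, assuming your build has it):

        ceph -w     # watch cluster state as it updates
        ceph -s     # print a one-shot status summary and exit

Check that the mds and osd lines show the daemons you started as up 
before retrying the mount.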

sage