Hi All,


   We are using Ceph with two OSDs and three clients. The clients mount a shared RBD image with the OCFS2 file system. When I mount, only two of the clients mount properly; the third client gives the errors below. Sometimes I am able to mount on the third client, but data does not sync to it.

mount /dev/rbd/rbd/integdownloads /soho/build/downloads

mount.ocfs2: Invalid argument while mounting /dev/rbd0 on /soho/build/downloads. Check 'dmesg' for more information on this error.
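
For reference, the image is mapped on each client before the mount, roughly as follows (the rbd/integdownloads pool/image name is inferred from the device path above, so treat the exact command as an assumption):

# rbd map rbd/integdownloads
# mount /dev/rbd/rbd/integdownloads /soho/build/downloads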



dmesg

[1280548.676688] (mount.ocfs2,1807,4):dlm_send_nodeinfo:1294 ERROR: node mismatch -22, node 0
[1280548.676766] (mount.ocfs2,1807,4):dlm_try_to_join_domain:1681 ERROR: status = -22
[1280548.677278] (mount.ocfs2,1807,8):dlm_join_domain:1950 ERROR: status = -22
[1280548.677443] (mount.ocfs2,1807,8):dlm_register_domain:2210 ERROR: status = -22
[1280548.677541] (mount.ocfs2,1807,8):o2cb_cluster_connect:368 ERROR: status = -22
[1280548.677602] (mount.ocfs2,1807,8):ocfs2_dlm_init:2988 ERROR: status = -22
[1280548.677703] (mount.ocfs2,1807,8):ocfs2_mount_volume:1864 ERROR: status = -22
[1280548.677800] ocfs2: Unmounting device (252,0) on (node 0)
[1280548.677808] (mount.ocfs2,1807,8):ocfs2_fill_super:1238 ERROR: status = -22






OCFS2 configuration



cluster:
       node_count=3
       heartbeat_mode = local
       name=ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.112.192
        number = 0
        name = integ-hm5
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.113.42
        number = 1
        name = integ-soho
        cluster = ocfs2

node:
        ip_port = 7778
        ip_address = 192.168.112.115
        number = 2
        name = integ-hm2
        cluster = ocfs2
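
The same cluster.conf needs to be present on all three clients; a quick way to confirm the copies match, assuming the standard /etc/ocfs2/cluster.conf path (run on each of integ-hm5, integ-soho and integ-hm2):

# md5sum /etc/ocfs2/cluster.conf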



Ceph status

# ceph -s
    cluster 944fa0af-b7be-45a9-93ff-b9907cfaee3f
     health HEALTH_OK
     monmap e2: 3 mons at {integ-hm5=192.168.112.192:6789/0,integ-hm6=192.168.112.193:6789/0,integ-hm7=192.168.112.194:6789/0}
            election epoch 54, quorum 0,1,2 integ-hm5,integ-hm6,integ-hm7
     osdmap e10: 2 osds: 2 up, 2 in
      pgmap v32626: 64 pgs, 1 pools, 10293 MB data, 8689 objects
            14575 MB used, 23651 GB / 24921 GB avail
                  64 active+clean
  client io 2047 B/s rd, 1023 B/s wr, 2 op/s





Regards

GJ