Hello, I am a new user of Ceph.
I have built a Ceph test environment for block storage
with 2 OSDs and 2 monitors. Apart from the failover test, all other tests pass.
When I run the failover test, if I stop one OSD the cluster stays OK,
but if I stop one monitor the entire cluster dies. Why? Thank you.
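For reference, Ceph monitors form a Paxos quorum that needs a strict majority of the configured monitors to be up. This is not Ceph code, just a small sketch of the majority arithmetic that may explain the behavior:

```python
def quorum_size(num_mons: int) -> int:
    # a Paxos quorum needs a strict majority of the configured monitors
    return num_mons // 2 + 1

def survives_one_failure(num_mons: int) -> bool:
    # can the remaining monitors still form a quorum with one down?
    return num_mons - 1 >= quorum_size(num_mons)

# with 2 monitors, quorum is 2, so losing either one loses quorum
print(quorum_size(2), survives_one_failure(2))  # → 2 False
# with 3 monitors, quorum is 2, so one failure is tolerated
print(quorum_size(3), survives_one_failure(3))  # → 2 True
```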
My configuration file:
; global
[global]
; enable secure authentication
; auth supported = cephx
auth cluster required = none
auth service required = none
auth client required = none
mon clock drift allowed = 3
; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = /home/ceph/mon$id
; some minimal logging (just message traffic) to aid debugging
debug ms = 1
[mon.0]
host = sheepdog1
mon addr = 192.168.0.19:6789
[mon.1]
mon data = /var/lib/ceph/mon.$id
host = sheepdog2
mon addr = 192.168.0.219:6789
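; note: with only two monitors, a quorum (strict majority) is 2, so
; stopping either monitor leaves the cluster without a quorum. A third
; monitor would tolerate one failure; sketch only, the host name and
; address below are hypothetical:
; [mon.2]
;     host = sheepdog3
;     mon addr = 192.168.0.x:6789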
; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps its secret encryption keys
keyring = /home/ceph/keyring.mds.$id
[mds.0]
host = sheepdog1
; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.
[osd]
; This is where the btrfs volume will be mounted.
osd data = /home/ceph/osd.$id
osd journal = /home/ceph/osd.$id/journal
osd journal size = 512
; working with ext4
filestore xattr use omap = true
; solve rbd data corruption
filestore fiemap = false
[osd.0]
host = sheepdog1
osd data = /var/lib/ceph/osd/diskb
osd journal = /var/lib/ceph/osd/diskb/journal
[osd.2]
host = sheepdog2
osd data = /var/lib/ceph/osd/diskc
osd journal = /var/lib/ceph/osd/diskc/journal

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com