I am not sure about my interpretation, so please help me understand
the result of the following experiment:
Experiment: 3-node cluster; all nodes run Ceph v0.48.2 (argonaut),
built from source, on Ubuntu 12.04 with kernel 3.2.0.
----------------------------------------------------------------------------------------------------------------------------------------------------------
VM1 (mon.0 + osd.0 + mds.0)    VM2 (mon.1 + osd.1 + mds.1)    VM3 (mon.2)
        - The cluster is up and reports HEALTH_OK.
        - The replication factor is 2 (by default, all pools have their
          replication factor set to 2); see the first sketch after this
          report for how I checked it.
        - After mounting with "mount.ceph mon_addr:port:/ ~/cephfs", I
          created a file inside the mounted directory "cephfs".
        - I was able to see the data on both OSDs, i.e. on VM1 (osd.0)
          and on VM2 (osd.1), and the file was accessible (see the
          object-mapping sketch below).
        - Then VM2 was taken down, and its absence was verified with
          "ceph -s" (see the status checks below).
        - Even though VM1 (mon.0 + osd.0 + mds.0) and VM3 (mon.2) were
          still live, I was unable to access the file.
        - I tried to remount on a different directory with
          "mount.ceph currently_live_mons:/ /home/hemant/xyz" (see the
          remount sketch below).
        - Even after that, I was unable to access the file stored on
          the cluster.
----------------------------------------------------------------------------------------------------------------------------------------------------------
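For reference, this is roughly how I checked the replication factor.
A sketch only: on v0.48 the per-pool settings appear in the "ceph osd
dump" output as "rep size", and "data" / "metadata" are the default
CephFS pools.

    # List pool settings; each pool line should show "rep size 2"
    $ ceph osd dump | grep 'rep size'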
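And this is a sketch of how the file's placement on both OSDs can be
checked. The file name and inode number below are made-up examples;
CephFS names a file's first RADOS object as the inode number in hex
followed by ".00000000".

    # Find the file's inode number (hypothetical example output)
    $ ls -i ~/cephfs/testfile
    1099511627776 /home/hemant/cephfs/testfile

    # 1099511627776 decimal = 10000000000 hex, so the first object is
    # 10000000000.00000000 in the "data" pool; map it to its PG/OSDs:
    $ ceph osd map data 10000000000.00000000
    # The output should end with the acting OSD set, e.g. [0,1]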
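After taking VM2 down, these are the checks I would expect to clarify
the cluster state (a sketch; the exact output format on v0.48 may
differ):

    # 2 of 3 monitors should still form a quorum
    $ ceph -s
    # osd.1 should now be reported down
    $ ceph osd tree
    # Was mds.1 the active MDS, and did mds.0 take over after the failure?
    $ ceph mds dump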
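Finally, the remount attempt, spelled out with explicit monitor
addresses (the IPs below are placeholders for the still-live mon.0 and
mon.2):

    $ sudo mkdir -p /home/hemant/xyz
    # Comma-separated addresses of the surviving monitors
    $ sudo mount.ceph 192.168.0.1:6789,192.168.0.3:6789:/ /home/hemant/xyz
    $ ls /home/hemant/xyz    # this is where the file is still inaccessible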
-
Hemant Surale.