Hi Srinivasa,
I ran into this problem at the beginning too, and I put together my own recipe to get the "initial monitor" working.

I hope it can be useful to you (it is a recipe, not a manual, so please pay attention to the paths and the way I did things):

1) ssh into the node
2) verify that the directory /etc/ceph exists; create it if it does not
3) create a ceph.conf file (ceph is the cluster name)
    a) this is a good base
        [global]
    auth service required = cephx
    filestore xattr use omap = true
    auth client required = cephx
    auth cluster required = cephx
    mon host = "omitted" ##### TO CHANGE ON NEW CLUSTER
    mon initial members = `hostname -s` ##### TO CHANGE ON NEW CLUSTER, E.G. ds-07-01
    public network = "network address".0/22 ##### TO CHANGE ON NEW CLUSTER, E.G. 192.168.0.0/22
    fsid = "put the generated one" ##### TO CHANGE ON NEW CLUSTER

    ####### OSD GLOBAL ##############
    #### REPLICA number ####
    osd pool default size = 1
    #### Allow writing one copy in a degraded state ####
    osd pool default min size = 1

    # Ensure you have a realistic number of placement groups. We recommend
    # approximately 100 per OSD. E.g., total number of OSDs multiplied by 100
    # divided by the number of replicas (i.e., osd pool default size). So for
    # 10 OSDs and osd pool default size = 3, we'd recommend approximately
    # (100 * 10) / 3 = 333.
    # Here: (100 * 20) / 1 = 2000
    # OSD = 20 --> 2000 --> rounded up to the next power of two: 2048

    osd pool default pg num = 2048
    osd pool default pgp num = 2048
    [mon.`hostname -s`] ######### MODIFY
    host = <hostname>
    mon addr = <address>

4) change the fsid to a new one
    a) run: uuidgen
5) Create a keyring for your cluster and generate a monitor secret key.
    a) ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    b) copy it into your cluster administration directory

6) Generate an administrator keyring, generate a client.admin user and add the user to the keyring.
    a) ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
    b) copy it into your cluster administration directory
7) Add the client.admin key to the ceph.mon.keyring.
    a) ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
8) Generate a monitor map using the hostname(s), host IP address(es) and the FSID. Save it as /tmp/monmap
    a) monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
9) Create a default data directory (or directories) on the monitor host(s).
    a) mkdir -p /var/lib/ceph/mon/{cluster-name}-{hostname}
10) Populate the monitor daemon(s) with the monitor map and keyring.
    a) ceph-mon --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
11) add the mon.{hostname} section to the ceph.conf file
12) start the initial monitor (see the example run after this list)
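
To make the recipe more concrete, here is what a full run of steps 4)-12) might look like on one node. The hostname ds-07-01, the IP 192.168.0.101 and the fsid shown are only example values of mine, and how you start the monitor in step 12) depends on your init system (sysvinit shown here; otherwise you can run ceph-mon directly):

    # 4) generate a new fsid and put it in the fsid line of /etc/ceph/ceph.conf
    uuidgen    # e.g. prints a7f64266-0894-4f1e-a635-d0aeaca0e993
    # 5) monitor keyring
    ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    # 6) administrator keyring with the client.admin user
    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
    # 7) import the admin key into the mon keyring
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    # 8) monitor map (hostname, IP and fsid are the example values above)
    monmaptool --create --add ds-07-01 192.168.0.101 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
    # 9) default data directory (cluster name "ceph", hostname ds-07-01)
    mkdir -p /var/lib/ceph/mon/ceph-ds-07-01
    # 10) populate the monitor with the map and the keyring
    ceph-mon --mkfs -i ds-07-01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    # 12) start it (or run it directly: ceph-mon -i ds-07-01)
    /etc/init.d/ceph start mon.ds-07-01
    # quick check that the monitor answers
    ceph -s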


############## ADDING ANOTHER MONITOR MANUAL WAY ###############
THERE MUST BE AN ACTIVE and working MONITOR FOR THIS PROCEDURE

NOTE: ensure that on the new monitor host there are NO ceph-mon or ceph-create-keys processes running; IF there are, KILL THEM ALL and MAKE SURE THEY ARE KILLED

1) push the client.admin keyring to the new host where the monitor will be added
    a) push the configuration to the new host (e.g. with scp; see the sketch near the end)
    b) add the new mon definition to the conf file:
    [mon.{name}]
        host =
        mon addr =
2) ssh into the new node
3) sudo mkdir -p /var/lib/ceph/mon/ceph-{mon-id}
4) create a temporary directory e.g. /tmp
5) retrieve the keyring for the monitors
    a) cd "temporary directory"
    b) ceph auth get mon. -o {tmp}/{filename}, e.g. ceph auth get mon. -o ./mon.keyring
6) retrieve the mon map
    a) ceph mon getmap -o {tmp}/{filename}
7) create the fs for the mon in the directory from step 3)
    a) ceph-mon -i ds-07-02 --mkfs --monmap ./mon.map --keyring ./mon.keyring
8) Add the new monitor to the list of monitors for your cluster (runtime). This enables other nodes to use this monitor during their initial startup.
    a) ceph mon add <mon-id> <ip>[:<port>]
    b) the process will hang
9) in ANOTHER SHELL LAUNCH:
    a) ceph-mon -i ds-07-02 --public-addr {address of new monitor}


the process from step 8 will be released
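
At this point you can also check that the new monitor has joined the quorum (this check is my addition, not strictly part of the recipe):

    ceph mon stat
    ceph -s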

REMEMBER TO SHARE THE CONFIGURATION WITH ALL HOSTS WHEN DONE (see the sketch below)
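
For example, a minimal sketch of sharing the configuration and the admin keyring, assuming root ssh access and that ds-07-02 and ds-07-03 are your other hosts (adjust the hostnames to your cluster):

    for h in ds-07-02 ds-07-03; do
        # copy the cluster conf and the client.admin keyring into /etc/ceph on each host
        scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@$h:/etc/ceph/
    done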

I hope this is useful.

Regards
Matteo

On 11/04/2014 12:12, Srinivasa Rao Ragolu wrote:
Hi All,


On our private distribution, I have compiled Ceph and was able to install it.

Now I have added /etc/ceph/ceph.conf as

[global]
fsid = e5a14ff4-148a-473a-8721-53bda59c74a2
mon initial members = mon
mon host = 192.168.0.102
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1


After this step I followed the instructions given for manual deployment of a
monitor node: http://ceph.com/docs/master/install/manual-deployment/

root@x86-generic-64:/etc/ceph# ceph start mon.mon
2014-04-11 10:10:43.504120 7f3800870700  0 -- :/1002663 >> 192.168.0.102:6789/0 pipe(0x7f37fc025400 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f37fc025660).fault
2014-04-11 10:10:46.505283 7f380076f700  0 -- :/1002663 >> 192.168.0.102:6789/0 pipe(0x7f37ec000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f37ec000e60).fault
2014-04-11 10:10:49.505715 7f3800870700  0 -- :/1002663 >> 192.168.0.102:6789/0 pipe(0x7f37ec003010 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f37ec003270).fault
2014-04-11 10:10:52.506090 7f380076f700  0 -- :/1002663 >> 192.168.0.102:6789/0 pipe(0x7f37ec003b10 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f37ec003d70).fault
2014-04-11 10:10:55.506699 7f3800870700  0 -- :/1002663 >> 192.168.0.102:6789/0 pipe(0x7f37ec0027e0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f37ec003910).fault
2014-04-11 10:10:58.507132 7f380076f700  0 -- :/1002663 >> 192.168.0.102:6789/0 pipe(0x7f37ec002f10 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f37ec003170).fault
2014-04-11 10:11:01.507715 7f3800870700  0 -- :/1002663 >> 192.168.0.102:6789/0 pipe(0x7f37ec0053b0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f37ec005610).fault
2014-04-11 10:11:04.508198 7f380076f700  0 -- :/1002663 >> 192.168.0.102:6789/0 pipe(0x7f37ec007050 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f37ec0072b0).fault
Any ceph command, like ceph, ceph status, etc., gives the same errors as above.

Please help me with how I can resolve the above errors.

Thanks in advance,
Srinivas.



_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
