I'm not sure why you are having such a hard time. I added monitors (and
removed them) on CentOS 7 by following what I had. The thing that kept
tripping me up was firewalld. Once I either shut it off or created a
service for Ceph, it worked fine.
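In case it is the same thing biting you, the quick-and-dirty version is either stopping firewalld or leaving it running and opening the monitor port. Roughly what I mean (assuming the default mon port 6789, adjust if yours differs):

    # option 1: just get firewalld out of the way
    systemctl stop firewalld && systemctl disable firewalld

    # option 2: keep firewalld and open the mon port permanently
    firewall-cmd --zone=public --add-port=6789/tcp --permanent
    firewall-cmd --reload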

What is in /var/log/ceph/ceph-mon.tauro.log when it is hunting for a
monitor?
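If nothing obvious shows up there, you can also dump what that monitor actually has on disk and compare it to what you expect. Something along these lines (assuming the mon id is "tauro"; the extract needs the mon stopped first):

    tail -n 50 /var/log/ceph/ceph-mon.tauro.log
    ceph-mon -i tauro --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap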

On Thu, Mar 12, 2015 at 2:31 PM, Jesus Chavez (jeschave) <[email protected]> wrote:

>  Hi Steffen, I already had them in my configuration 😞 I am stressed now
> because it seems like none of the methods helped :( This is bad; I think I
> am going to go back to RHEL 6.6, where xfs is a damn add-on and I have to
> install it from the CentOS repo to make Ceph work, like a patch :( But at
> least with RHEL 6.6 it works. Shame on RHEL 7; the next time I will sell
> everything with Ubuntu lol
>
>
> Jesus Chavez
> SYSTEMS ENGINEER - C.SALES
>
> [email protected]
> Phone: +52 55 5267 3146
> Mobile: +51 1 5538883255
>
> CCIE - 44433
>
> On Mar 12, 2015, at 1:56 PM, Steffen W Sørensen <[email protected]> wrote:
>
>
>  On 12/03/2015, at 20.00, Jesus Chavez (jeschave) <[email protected]>
> wrote:
>
>  That's what I thought and actually did: the monmap and keyring were copied
> to the new monitor and there, with the 2 elements, I did the mkfs thing and I
> still get that message. Do I need OSDs configured? Because I have none and I
> am not sure if it is required... Also it is weird that the monmap is not
> taking the new monitor. I think I should try to configure the 3 monitors as
> initial monitors and see how it goes.
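>
> Roughly what I ran on the new node (from memory, so exact paths may be a
> bit off; "tauro" is the new mon's hostname and <new-mon-ip> is just a
> placeholder for its address):
>
>   # on an existing, working monitor: export the current monmap and mon. key
>   ceph mon getmap -o /tmp/monmap
>   ceph auth get mon. -o /tmp/mon.keyring
>
>   # on the new node: build the mon data directory from those two files
>   ceph-mon -i tauro --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
>
>   # add it to the monmap and start it
>   ceph mon add tauro <new-mon-ip>:6789
>   ceph-mon -i tauro --public-addr <new-mon-ip>:6789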
>
> Dunno about your config, but I seem to remember that when I decommissioned
> one mon instance and added a new one on another node, I needed to have a
> mon.<id> section in ceph.conf in order to be able to start the monitor.
>
> ceph.conf snippet:
>
>   [osd]
>   osd mount options xfs = "rw,noatime,nobarrier,logbsize=256k,logbufs=8,allocsize=4M,attr2,delaylog,inode64,noquota"
>   keyring = /var/lib/ceph/osd/ceph-$id/keyring
>
>   ; Tuning
>   ;# By default, Ceph makes 3 replicas of objects. If you want to make four
>   ;# copies of an object the default value--a primary copy and three replica
>   ;# copies--reset the default values as shown in 'osd pool default size'.
>   ;# If you want to allow Ceph to write a lesser number of copies in a
>   ;# degraded state, set 'osd pool default min size' to a number less than
>   ;# the 'osd pool default size' value.
>   osd pool default size = 2      # Write an object 2 times.
>   osd pool default min size = 1  # Allow writing one copy in a degraded state.
>
>   ;# Ensure you have a realistic number of placement groups. We recommend
>   ;# approximately 100 per OSD. E.g., total number of OSDs multiplied by 100
>   ;# divided by the number of replicas (i.e., osd pool default size). So for
>   ;# 10 OSDs and osd pool default size = 3, we'd recommend approximately
>   ;# (100 * 10) / 3 = 333.
>   ;# Got 24 OSDs => 1200 pgs, but this is not a full production site, so
>   ;# let's settle for 1024 to lower cpu load.
>   osd pool default pg num = 1024
>   osd pool default pgp num = 1024
>
>   client cache size = 131072
>   osd client op priority = 40
>   osd op threads = 8
>   osd client message size cap = 512
>   filestore min sync interval = 10
>   filestore max sync interval = 60
>   ;filestore queue max bytes = 10485760
>   ;filestore queue max ops = 50
>   ;filestore queue committing max ops = 500
>   ;filestore queue committing max bytes = 104857600
>   ;filestore op threads = 2
>   recovery max active = 2
>   recovery op priority = 30
>   osd max backfills = 2
>
>   ; Journal Tuning
>   journal size = 5120
>   ;journal max write bytes = 1073714824
>   ;journal max write entries = 10000
>   ;journal queue max ops = 50000
>   ;journal queue max bytes = 10485760000
>
>   [mon.0]
>   host = node4
>   mon addr = 10.0.3.4:6789
>
>   [mon.1]
>   host = node2
>   mon addr = 10.0.3.2:6789
>
>   [mon.2]
>   host = node1
>   mon addr = 10.0.3.1:6789
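>
> For the "initial members" idea you mentioned, I would guess something like
> this in [global]; the IDs and addresses below are just taken from my mon
> sections above, so adjust them to your own nodes:
>
>   [global]
>   mon initial members = 0, 1, 2
>   mon host = 10.0.3.4:6789, 10.0.3.2:6789, 10.0.3.1:6789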
>
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
