Re: [ceph-users] Replace all monitors

2013-08-10 Thread Olivier Bonvalet
On Thursday, 08 August 2013 at 18:04 -0700, Sage Weil wrote:
 On Fri, 9 Aug 2013, Olivier Bonvalet wrote:
  On Thursday, 08 August 2013 at 09:43 -0700, Sage Weil wrote:
   On Thu, 8 Aug 2013, Olivier Bonvalet wrote:
Hi,

Right now I have 5 monitors which share slow SSDs with several OSD
journals. As a result, each data migration operation (reweight,
recovery, etc.) is very slow and the cluster is nearly down.

So I have to change that. I'm looking to replace these 5 monitors with
3 new monitors, which would still share (very fast) SSDs with several
OSDs. I suppose it's not a good idea, since monitors should have
dedicated storage. What do you think about that?
Is it better practice to have dedicated storage, but share CPU with
Xen VMs?
   
   I think it's okay, as long as you aren't worried about the device
   filling up and the monitors are on different hosts.
  
  I'm not sure I understand: by "dedicated storage", I was talking
  about the monitor's storage. Can I put monitors on a Xen host, if
  they have dedicated storage?
 
 Yeah, Xen would work fine here, although I'm not sure it is necessary.  
 Just putting /var/lib/mon on a different storage device will probably be 
 the most important piece.  It sounds like it is storage contention, and 
 not CPU contention, that is the source of your problems.
 
 sage
 

Yep, the transition worked fine, thanks! The new mons are really
faster, and now I can migrate data without downtime. Good job, devs!

Thanks again.

Olivier
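
For anyone who instead keeps their existing monitors, Sage's suggestion
of putting the mon data on its own device can be sketched roughly as
follows (a sketch only, assuming the default /var/lib/ceph/mon/ceph-<id>
path and sysvinit-style service scripts; the mon id 'a', the mountpoint
/mnt/mon-a and the device /dev/sdb1 are placeholders):

    # stop the monitor before touching its store
    service ceph stop mon.a

    # prepare the dedicated device and copy the existing store onto it
    mkfs.xfs /dev/sdb1
    mount /dev/sdb1 /mnt/mon-a
    rsync -a /var/lib/ceph/mon/ceph-a/ /mnt/mon-a/
    umount /mnt/mon-a

    # mount the device over the mon data directory (add it to /etc/fstab too)
    mount /dev/sdb1 /var/lib/ceph/mon/ceph-a
    service ceph start mon.a

Doing one monitor at a time keeps the remaining mons in quorum throughout.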



[ceph-users] Replace all monitors

2013-08-08 Thread Olivier Bonvalet
Hi,

Right now I have 5 monitors which share slow SSDs with several OSD
journals. As a result, each data migration operation (reweight,
recovery, etc.) is very slow and the cluster is nearly down.

So I have to change that. I'm looking to replace these 5 monitors with
3 new monitors, which would still share (very fast) SSDs with several
OSDs. I suppose it's not a good idea, since monitors should have
dedicated storage. What do you think about that?
Is it better practice to have dedicated storage, but share CPU with
Xen VMs?

Second point: I'm not sure how to do that migration without downtime.
I was hoping to add the 3 new monitors, then progressively remove the
5 old monitors, but the doc [1] indicates a special procedure for an
unhealthy cluster, which seems to be meant for clusters with damaged
monitors, right? In my case I only have dead PGs [2] (#5226), from
which I can't recover, but the monitors are fine. Can I use the
standard procedure?

Thanks,
Olivier

[1] http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#removing-monitors
[2] http://tracker.ceph.com/issues/5226



Re: [ceph-users] Replace all monitors

2013-08-08 Thread Sage Weil
On Thu, 8 Aug 2013, Olivier Bonvalet wrote:
 Hi,
 
 Right now I have 5 monitors which share slow SSDs with several OSD
 journals. As a result, each data migration operation (reweight,
 recovery, etc.) is very slow and the cluster is nearly down.
 
 So I have to change that. I'm looking to replace these 5 monitors
 with 3 new monitors, which would still share (very fast) SSDs with
 several OSDs. I suppose it's not a good idea, since monitors should
 have dedicated storage. What do you think about that?
 Is it better practice to have dedicated storage, but share CPU with
 Xen VMs?

I think it's okay, as long as you aren't worried about the device
filling up and the monitors are on different hosts.
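
(Keeping an eye on the device filling up is just a matter of watching
the mon store and the filesystem it lives on; a quick check, assuming
the default data path and a mon id of 'a' as a placeholder:)

    du -sh /var/lib/ceph/mon/ceph-a    # size of the monitor's store
    df -h /var/lib/ceph/mon/ceph-a     # free space on the device backing it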

 Second point: I'm not sure how to do that migration without downtime.
 I was hoping to add the 3 new monitors, then progressively remove the
 5 old monitors, but the doc [1] indicates a special procedure for an
 unhealthy cluster, which seems to be meant for clusters with damaged
 monitors, right? In my case I only have dead PGs [2] (#5226), from
 which I can't recover, but the monitors are fine. Can I use the
 standard procedure?

The 'healthy' caveat in this case is about the monitor cluster; the
special procedure is only needed if you don't have enough healthy mons
to form a quorum. The normal procedure should work just fine.

sage
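
Spelled out, the normal procedure looks roughly like this (a sketch
based on [1], not verbatim; the new mon id 'f', the old mon id 'b' and
the address 192.0.2.10 are placeholders, and the commands assume admin
keyring access and default paths):

    ## add each new monitor, one at a time, on its new host
    mkdir -p /var/lib/ceph/mon/ceph-f
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i f --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add f 192.0.2.10:6789                 # register it in the monmap
    ceph-mon -i f --public-addr 192.0.2.10:6789    # start the daemon

    ## once the new mons are in quorum, drop the old ones one by one
    ceph quorum_status                  # verify quorum between each step
    service ceph stop mon.b             # on its host; adjust to your init system
    ceph mon remove b                   # remove it from the monmap
    # finally drop mon.b from ceph.conf on all hosts

Going one monitor at a time and checking quorum between steps keeps a
majority of mons up for the whole migration, so there is no downtime.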


Re: [ceph-users] Replace all monitors

2013-08-08 Thread Olivier Bonvalet
On Thursday, 08 August 2013 at 09:43 -0700, Sage Weil wrote:
 On Thu, 8 Aug 2013, Olivier Bonvalet wrote:
  Hi,
  
  Right now I have 5 monitors which share slow SSDs with several OSD
  journals. As a result, each data migration operation (reweight,
  recovery, etc.) is very slow and the cluster is nearly down.
  
  So I have to change that. I'm looking to replace these 5 monitors
  with 3 new monitors, which would still share (very fast) SSDs with
  several OSDs. I suppose it's not a good idea, since monitors should
  have dedicated storage. What do you think about that?
  Is it better practice to have dedicated storage, but share CPU with
  Xen VMs?
 
 I think it's okay, as long as you aren't worried about the device
 filling up and the monitors are on different hosts.

I'm not sure I understand: by "dedicated storage", I was talking about
the monitor's storage. Can I put monitors on a Xen host, if they have
dedicated storage?

 
  Second point: I'm not sure how to do that migration without downtime.
  I was hoping to add the 3 new monitors, then progressively remove the
  5 old monitors, but the doc [1] indicates a special procedure for an
  unhealthy cluster, which seems to be meant for clusters with damaged
  monitors, right? In my case I only have dead PGs [2] (#5226), from
  which I can't recover, but the monitors are fine. Can I use the
  standard procedure?
 
 The 'healthy' caveat in this case is about the monitor cluster; the
 special procedure is only needed if you don't have enough healthy
 mons to form a quorum. The normal procedure should work just fine.
 

Great, thanks!


 sage
 

