Hi David,

I have had a few weird issues when shutting down a node, and I can 
replicate them by doing a “stop ceph-all” as well. OSD failure detection 
seems to take a lot longer when a monitor goes down at the same time; 
sometimes I have seen the whole cluster grind to a halt for several minutes 
before it works out what’s happened.

If I stop either role, wait for it to be detected as failed, and then stop 
the next role, I don’t see the problem. So it might be something to keep in 
mind when doing maintenance.
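As a rough sketch of that staged approach: stop one role, wait for the cluster to notice, then stop the next. `wait_for` below is a hypothetical helper, and the Upstart job names and IDs in the usage comments are illustrative, matching the “stop ceph-all” style used above.

```shell
# Poll a command once a second until it succeeds, or give up after a
# timeout (in seconds). Returns 0 on success, 1 on timeout.
wait_for() {
    local timeout=$1; shift
    local waited=0
    until "$@"; do
        sleep 1
        waited=$((waited + 1))
        if [ "$waited" -ge "$timeout" ]; then
            return 1
        fi
    done
    return 0
}

# Example against a live cluster (hypothetical OSD id 3 and mon host "node1"):
#   stop ceph-osd id=3
#   wait_for 300 sh -c "ceph osd tree | grep -w 'osd.3' | grep -q down"
#   stop ceph-mon id=node1
#   wait_for 300 sh -c "ceph quorum_status >/dev/null 2>&1"
```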

Nick

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of David 
Graham
Sent: 10 February 2015 17:07
To: ceph-us...@ceph.com
Subject: [ceph-users] combined ceph roles

Hello, I'm giving thought to a minimal-footprint scenario with full redundancy. 
I realize it isn't ideal--and may impact overall performance--but I'm wondering 
whether the example below would work, is supported, or is known to cause issues.

Example: 3x hosts, each running:
-- OSDs
-- Mon
-- Client
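For reference, a three-host layout like this would typically be described in ceph.conf along these lines (the hostnames, addresses, and fsid here are made up):

```ini
[global]
fsid = 11111111-2222-3333-4444-555555555555
mon_initial_members = host1, host2, host3
mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3
; three replicas, one per host
osd_pool_default_size = 3
```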



I thought I read a post a while back about running a client and an OSD on the 
same host possibly being an issue -- but I am having difficulty finding that 
reference.

I would appreciate it if anyone has insight into such a setup.

thanks!

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
