Hi,
Thanks for the link.  

I unset the nodown flag and things did seem to improve, although we did still 
get a few reports from users about issues related to filesystem (rbd) access, 
even after that action was taken.
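
In case it helps anyone searching the archives later, the flag changes amounted 
to roughly the following (a sketch of the commands, not our exact session):

    # keep noout so PGs on the down node are not rebalanced away
    ceph osd set noout

    # clear nodown so unresponsive OSDs can actually be marked down and
    # client I/O stops waiting on them
    ceph osd unset nodown

    # verify which flags are still set
    ceph osd dump | grep flags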

Thanks again,

Shain 

> On Mar 13, 2017, at 2:43 AM, Alexandre DERUMIER <aderum...@odiso.com> wrote:
> 
> Hi,
> 
>>> Currently I have the noout and nodown flags set while doing the 
>>> maintenance work.
> 
> You only need noout to avoid rebalancing.
> 
> see documentation:
> http://docs.ceph.com/docs/kraken/rados/troubleshooting/troubleshooting-osd/
> "STOPPING W/OUT REBALANCING".
> 
> 
> Your clients are hanging because of the nodown flag.
> 
> 
> See this blog post for experiments with the noout and nodown flags:
> 
> https://www.sebastien-han.fr/blog/2013/04/17/some-ceph-experiments/
> 
> ----- Original Message -----
> From: "Shain Miley" <smi...@npr.org>
> To: "ceph-users" <ceph-us...@ceph.com>
> Sent: Monday, March 13, 2017 04:58:08
> Subject: [ceph-users] noout, nodown and blocked requests
> 
> Hello, 
> One of the nodes in our 14-node cluster is offline. Before I fully commit to 
> removing the node from the cluster (there is a chance I can get it back in 
> working order in the next few days), I would like to run the cluster with 
> that single node out for a few days. 
> 
> Currently I have the noout and nodown flags set while doing the maintenance 
> work. 
> 
> Some users are complaining about disconnects and other oddities when trying 
> to save and access files currently on the cluster. 
> 
> I am also seeing some blocked requests when viewing the cluster status (at 
> this point I see 160 blocked requests spread over 15 to 20 OSDs). 
> 
> Currently I have a replication level of 3 on this pool and a min_size of 1. 
> 
> My question is this: is there a better method to use (other than noout and 
> nodown) in this scenario, where I do not want data movement yet but do want 
> reads and writes to the cluster to respond as normally as possible for the 
> end users? 
> 
> Thanks in advance, 
> 
> Shain 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
