Re: [ceph-users] ceph degraded writes

2016-05-03 Thread Gregory Farnum
On Tue, May 3, 2016 at 4:10 PM, Ben Hines wrote: > The Hammer .93 to .94 notes said: > If upgrading from v0.93, set osd enable degraded writes = false on all osds > prior to upgrading. The degraded writes feature has been reverted due to > 11155. > > Our cluster is now on

[ceph-users] ceph degraded writes

2016-05-03 Thread Ben Hines
The Hammer .93 to .94 notes said: If upgrading from v0.93, set osd enable degraded writes = false on all osds prior to upgrading. The degraded writes feature has been reverted due to 11155. Our cluster is now on Infernalis 9.2.1 and we still have this setting set. Can we get rid of it? Was this
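The setting quoted from the Hammer release notes would normally live in the [osd] section of ceph.conf. A minimal sketch, using the option name exactly as it appears in the notes:

```ini
# ceph.conf -- workaround from the Hammer v0.94 release notes,
# only relevant when upgrading from v0.93 (the feature was
# reverted due to issue 11155).
[osd]
osd enable degraded writes = false
```

Whether the option still has any effect on a later release such as Infernalis is exactly the question being asked here; stale workaround settings like this are generally safe to test by removing them from one OSD's config first.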

Re: [ceph-users] Ceph Degraded

2014-12-01 Thread Georgios Dimitrakakis
November, 2014 11:13:05 AM Subject: [ceph-users] Ceph Degraded Hi all! I am setting up a new cluster with 10 OSDs and the state is degraded! # ceph health HEALTH_WARN 940 pgs degraded; 1536 pgs stuck unclean # There are only the default pools # ceph osd lspools 0 data,1 metadata,2 rbd, with each

[ceph-users] Ceph Degraded

2014-11-29 Thread Georgios Dimitrakakis
Hi all! I am setting up a new cluster with 10 OSDs and the state is degraded! # ceph health HEALTH_WARN 940 pgs degraded; 1536 pgs stuck unclean # There are only the default pools # ceph osd lspools 0 data,1 metadata,2 rbd, with each one having 512 pg_num and 512 pgp_num # ceph osd dump
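The counts in the health output are consistent with every PG in the cluster being unhealthy: three default pools at 512 PGs each is exactly 1536. A quick back-of-the-envelope check, assuming the default replication size of 3 (the pool size is not shown in the snippet):

```python
# PG accounting for the cluster described above.
pools = 3            # data, metadata, rbd -- the defaults at the time
pg_per_pool = 512    # pg_num / pgp_num per the lspools output
total_pgs = pools * pg_per_pool
print(total_pgs)     # 1536 -- matches the "1536 pgs stuck unclean" count

# Rule of thumb from the Ceph docs: target roughly 100 PGs per OSD
# across all pools, divided by the replication size.
osds = 10
replica_size = 3     # assumed default; verify with `ceph osd dump | grep size`
recommended_total = osds * 100 // replica_size
print(recommended_total)  # 333 -- so 1536 total PGs is on the high side
```

On a freshly installed cluster this symptom is often caused by the CRUSH rule requiring replicas on separate hosts while the OSDs are spread over fewer hosts than the pool's replication size; `ceph osd tree` and the pool `size` field in `ceph osd dump` are the usual places to check.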

Re: [ceph-users] Ceph Degraded

2014-11-29 Thread Andrei Mikhailovsky
- From: Georgios Dimitrakakis gior...@acmac.uoc.gr To: ceph-users@lists.ceph.com Sent: Saturday, 29 November, 2014 11:13:05 AM Subject: [ceph-users] Ceph Degraded Hi all! I am setting up a new cluster with 10 OSDs and the state is degraded! # ceph health HEALTH_WARN 940 pgs degraded