Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Christian Balzer chibi@... writes: So either make sure these pools really have a replication of 2 by deleting and re-creating them or add a third storage node. I just executed ceph osd pool set {POOL} size 2 for both pools. Anything else I need to do? I still don't see any changes to the
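
For reference, a minimal sketch of that step, assuming the stock pool names data and metadata from earlier in the thread (output wording varies a little by release):

    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    # confirm the change actually took effect
    ceph osd pool get data size
    ceph osd dump | grep 'replicated size'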

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Gregory Farnum greg@... writes: On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett brian.lovett@... wrote: profile: bobtail, Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? You probably want to set the crush tunables to optimal
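
A hedged sketch of checking and updating the tunables profile; the "optimal" profile is only safe if every client kernel/librados in use is recent enough for it:

    ceph osd crush show-tunables      # inspect the current profile
    ceph osd crush tunables optimal   # switch profiles; triggers data movement
    ceph -w                           # watch the cluster rebalance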

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Christian Balzer chibi@... writes: Read EVERYTHING you can find about crushmap rules. The quickstart (I think) talks about 3 storage nodes, not OSDs. Ceph is quite good when it comes to defining failure domains; the default is to segregate at the storage node level. What good is a
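
The part of the CRUSH map that encodes this failure domain is the rule's chooseleaf step. Roughly, and with file names assumed (this is a sketch, not the poster's exact procedure):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # a default replicated rule in crushmap.txt typically contains:
    #   step chooseleaf firstn 0 type host
    # with only two hosts, three replicas cannot all be placed, hence degraded PGs
    crushtool -c crushmap.txt -o crushmap.new   # recompile after editing
    ceph osd setcrushmap -i crushmap.new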

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Alright, I was finally able to get this resolved without adding another node. As pointed out, even though I had a config variable that defined the default replicated size as 2, ceph for some reason created the default pools (data and metadata) with a value of 3. After digging through
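
For anyone hitting the same thing, the fix boils down to something like this; note that the ceph.conf setting only affects pools created after it is in place, so existing pools still have to be changed by hand:

    # ceph.conf, [global] section -- applies to newly created pools only
    osd pool default size = 2
    osd pool default min_size = 1

    # existing pools must be changed explicitly
    ceph osd pool set data size 2
    ceph osd pool set metadata size 2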

[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
I'm pulling my hair out with ceph. I am testing things with a 5-server cluster. I have 3 monitors and two storage machines, each with 4 OSDs. I have started from scratch 4 times now and can't seem to figure out how to get a clean status. Ceph health reports: HEALTH_WARN 34 pgs degraded; 192
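
Commands along these lines usually narrow down where the PGs are stuck (exact output fields vary by release):

    ceph -s                      # overall cluster state
    ceph health detail           # lists the affected PGs
    ceph pg dump_stuck unclean   # PGs that never reached active+clean
    ceph osd tree                # how OSDs map onto hosts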

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Brian Lovett brian.lovett@... writes: I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in. Why would that be?
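
The monitors only mark an OSD down after the failure-report/timeout machinery fires, so a freshly killed OSD can show as up for a while, especially when no surviving peers are left to report it. A rough way to check and force the issue (OSD ids are assumed here):

    # relevant timeouts (defaults approximate):
    #   osd heartbeat grace     -- peers report an OSD dead after ~20s
    #   mon osd report timeout  -- monitors act without reports after ~900s
    ceph osd tree        # what the monitors currently believe
    ceph osd down 0 1    # manually mark specific OSDs down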

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum greg@... writes: What's the output of ceph osd map? Your CRUSH map probably isn't trying to segregate properly, with 2 hosts and 4 OSDs each. Software Engineer #42 at http://inktank.com | http://ceph.com Is this what you are looking for? ceph osd map rbd ceph osdmap
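
For what it's worth, ceph osd map wants both a pool and an object name; the object name is arbitrary, so the probe looks something like this:

    ceph osd map rbd test-object
    # prints the PG the object hashes to and the up/acting OSD set, e.g.
    # osdmap eNN pool 'rbd' (2) object 'test-object' -> pg ... -> up [...] acting [...]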

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum greg@... writes: ...and one more time, because apparently my brain's out to lunch today: ceph osd tree *sigh* haha, we all have those days.
[root@monitor01 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-1      14.48   root default
-2      7.24

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum greg@... writes: So those disks are actually different sizes, in proportion to their weights? It could be having an impact on this, although it *shouldn't* be an issue. And your tree looks like it's correct, which leaves me thinking that something is off about your crush rules.
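
To rule the weights and rules in or out without decompiling the whole map, something like the following helps; the OSD name and weight below are purely illustrative:

    ceph osd crush rule dump            # how each rule places replicas
    ceph osd crush dump                 # buckets, weights and rules as one JSON blob
    ceph osd crush reweight osd.3 1.81  # adjust a single OSD's crush weight if needed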

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum greg@... writes: On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett brian.lovett@... wrote: profile: bobtail, Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? This is a fresh install (as of today) running