Re: [ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-21 Thread Jiri Kanicky
Hi, BTW, is there a way to achieve redundancy over multiple OSDs in one box by changing the CRUSH map? Thank you Jiri On 20/01/2015 13:37, Jiri Kanicky wrote: Hi, Thanks for the reply. That clarifies it. I thought that redundancy could be achieved with multiple OSDs (like multiple disks

Re: [ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-21 Thread Jiri Kanicky
Hi, Thanks for the reply. That clarifies it. I thought that redundancy could be achieved with multiple OSDs (like multiple disks in a RAID) when you don't have more nodes. Obviously the single point of failure would be the box. My current setting is: osd_pool_default_size = 2 Thank you J
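
A minimal ceph.conf sketch for that setting, with values that are only illustrative for a single-node lab; note that osd_pool_default_size only applies to pools created after the change, so existing pools keep their current size until it is set explicitly:

[global]
    # replica count for newly created pools
    osd_pool_default_size = 2
    # allow I/O to continue with a single replica available
    osd_pool_default_min_size = 1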

Re: [ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-21 Thread Lindsay Mathieson
You only have one OSD node (ceph4). The default replication requirement for your pools (size = 3) needs OSDs spread over three nodes, so the data can be replicated on three different nodes. That is why your PGs are degraded. You need to either add more OSD nodes or reduce your size setting
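
A rough sketch of the second option (reducing the size setting) on the 'test' pool from this thread, assuming it still has the default size of 3; with the default CRUSH rule placing replicas on separate hosts, a single OSD node can only fully satisfy size = 1, so treat this as a lab-only workaround:

$ sudo ceph osd pool get test size
size: 3
$ sudo ceph osd pool set test size 1
$ sudo ceph osd pool set test min_size 1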

Re: [ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-19 Thread Jiri Kanicky
Hi. I am just curious. This is just a lab environment and we are short on hardware :). We will have more hardware later, but right now this is all I have. Monitors are VMs. Anyway, we will have to survive with this somehow :). Thanks Jiri On 20/01/2015 15:33, Lindsay Mathieson wrote: On 20

Re: [ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-19 Thread Lindsay Mathieson
On 20 January 2015 at 14:10, Jiri Kanicky wrote: > Hi, > > BTW, is there a way to achieve redundancy over multiple OSDs in one > box by changing the CRUSH map? > I asked that same question myself a few weeks back :) The answer was yes, but fiddly, and why would you do that? It's kinda breakin
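
For reference, a rough sketch of the fiddly CRUSH edit being discussed, assuming the default replicated rule and illustrative file names; changing the chooseleaf type from host to osd lets replicas land on different OSDs inside the same box:

$ sudo ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: in the rule, change "step chooseleaf firstn 0 type host" to "type osd"
$ crushtool -c crushmap.txt -o crushmap.new
$ sudo ceph osd setcrushmap -i crushmap.new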

[ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-19 Thread Jiri Kanicky
Hi, I just would like to clarify whether I should expect degraded PGs with 11 OSDs in one node. I am not sure if a setup with 3 MON nodes and 1 OSD node (11 disks) allows me to have a healthy cluster.
$ sudo ceph osd pool create test 512
pool 'test' created
$ sudo ceph status
cluster 4e77327a-118d-45
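
A short sketch of how the single-host layout behind the degraded PGs can be confirmed, assuming the 'test' pool and the ceph4 host named elsewhere in the thread:

$ sudo ceph osd tree                 # all 11 OSDs appear under the single host ceph4
$ sudo ceph osd pool get test size   # default is 3, more than the number of OSD hosts
$ sudo ceph health detail            # lists the degraded PGs and the OSDs acting for them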