Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-14 Thread dE

On 10/15/2017 03:13 AM, Denes Dolhay wrote:


Hello,

Could you include the monitors and the osds as well to your clock skew 
test?


How did you create the osds? ceph-deploy osd create osd1:/dev/sdX 
osd2:/dev/sdY osd3: /dev/sdZ ?


Some log from one of the osds would be great!


Kind regards,

Denes.


On 10/14/2017 07:39 PM, dE wrote:

On 10/14/2017 08:18 PM, David Turner wrote:


What are the ownership permissions on your osd folders? Clock skew 
cares about partial seconds.
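
For example, a quick way to check (assuming the default /var/lib/ceph layout and Jewel's ceph:ceph daemon user; osd.0 is just an example id):

ls -ld /var/lib/ceph/osd/ceph-*   # directories should be owned by ceph:ceph
ls -l /var/lib/ceph/osd/ceph-0/   # keyring, fsid, current/ likewise
# if anything is still root-owned, fix it and restart that osd:
# chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 && systemctl restart ceph-osd@0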


It isn't the networking issue because your cluster isn't stuck 
peering. I'm not sure if the creating state happens on disk or in 
the cluster.



On Sat, Oct 14, 2017, 10:01 AM dE wrote:


I attached 1TB disks to each osd.

cluster 8161c90e-dbd2-4491-acf8-74449bef916a
 health HEALTH_ERR
    clock skew detected on mon.1, mon.2

    64 pgs are stuck inactive for more than 300 seconds
    64 pgs stuck inactive
    too few PGs per OSD (21 < min 30)
    Monitor clock skew detected
 monmap e1: 3 mons at {0=10.247.103.139:8567/0,1=10.247.103.140:8567/0,2=10.247.103.141:8567/0}
    election epoch 12, quorum 0,1,2 0,1,2
 osdmap e10: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds
  pgmap v38: 64 pgs, 1 pools, 0 bytes data, 0 objects
    33963 MB used, 3037 GB / 3070 GB avail
  64 creating

I don't seem to have any clock skew --
for i in {139..141}; do ssh $i date +%s; done
1507989554
1507989554
1507989554
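
A whole-second `date +%s` can't show the sub-second drift the monitors warn about (mon_clock_drift_allowed defaults to 0.05 s), so a finer-grained sketch, reusing the same ssh aliases, would be:

for i in {139..141}; do ssh $i date +%s.%N; done
# or ask ntp on each host for its measured offset:
for i in {139..141}; do ssh $i ntpq -pn; done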


On Sat, Oct 14, 2017 at 6:41 PM, David Turner wrote:

What is the output of your `ceph status`?


On Fri, Oct 13, 2017, 10:09 PM dE wrote:

On 10/14/2017 12:53 AM, David Turner wrote:

What does your environment look like?  Someone recently
on the mailing list had PGs stuck creating because of a
networking issue.

On Fri, Oct 13, 2017 at 2:03 PM Ronny Aasen wrote:

strange that no osd is acting for your pg's
can you show the output from
ceph osd tree


mvh
Ronny Aasen



On 13.10.2017 18:53, dE wrote:
> Hi,
>
>     I'm running ceph 10.2.5 on Debian (official
package).
>
> It can't seem to create any functional pools --
>
> ceph health detail
> HEALTH_ERR 64 pgs are stuck inactive for more
than 300 seconds; 64 pgs
> stuck inactive; too few PGs per OSD (21 < min 30)
> pg 0.39 is stuck inactive for 652.741684, current
state creating, last
> acting []
> pg 0.38 is stuck inactive for 652.741688, current
state creating, last
> acting []
> pg 0.37 is stuck inactive for 652.741690, current
state creating, last
> acting []
> pg 0.36 is stuck inactive for 652.741692, current
state creating, last
> acting []
> pg 0.35 is stuck inactive for 652.741694, current
state creating, last
> acting []
> pg 0.34 is stuck inactive for 652.741696, current
state creating, last
> acting []
> pg 0.33 is stuck inactive for 652.741698, current
state creating, last
> acting []
> pg 0.32 is stuck inactive for 652.741701, current
state creating, last
> acting []
> pg 0.3 is stuck inactive for 652.741762, current
state creating, last
> acting []
> pg 0.2e is stuck inactive for 652.741715, current
state creating, last
> acting []
> pg 0.2d is stuck inactive for 652.741719, current
state creating, last
> acting []
> pg 0.2c is stuck inactive for 652.741721, current
state creating, last
> acting []
> pg 0.2b is stuck inactive for 652.741723, current
state creating, last
> acting []
> pg 0.2a is stuck inactive for 652.741725, current
state creating, last
> acting []
> pg 0.29 is stuck inactive for 652.741727, current

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-14 Thread Denes Dolhay

Hello,

Could you include the monitors and the osds as well to your clock skew test?

How did you create the osds? ceph-deploy osd create osd1:/dev/sdX 
osd2:/dev/sdY osd3: /dev/sdZ ?
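
For reference, the Jewel-era ceph-deploy syntax would be roughly (host names and devices below are placeholders):

ceph-deploy osd create osd1:/dev/sdX osd2:/dev/sdY osd3:/dev/sdZ
# or in two steps per osd:
ceph-deploy osd prepare osd1:/dev/sdX
ceph-deploy osd activate osd1:/dev/sdX1
# afterwards the osds should appear with non-zero weight in `ceph osd tree`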


Some log from one of the osds would be great!
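
On a stock Debian install they would normally be found like this (osd.0 is just an example id):

less /var/log/ceph/ceph-osd.0.log
journalctl -u ceph-osd@0 --since "1 hour ago"
# verbosity can be raised temporarily on a running osd:
ceph tell osd.0 injectargs '--debug-osd 10 --debug-ms 1'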


Kind regards,

Denes.


On 10/14/2017 07:39 PM, dE wrote:

On 10/14/2017 08:18 PM, David Turner wrote:


What are the ownership permissions on your osd folders? Clock skew 
cares about partial seconds.


It isn't the networking issue because your cluster isn't stuck 
peering. I'm not sure if the creating state happens on disk or in the 
cluster.



On Sat, Oct 14, 2017, 10:01 AM dE wrote:


I attached 1TB disks to each osd.

cluster 8161c90e-dbd2-4491-acf8-74449bef916a
 health HEALTH_ERR
    clock skew detected on mon.1, mon.2

    64 pgs are stuck inactive for more than 300 seconds
    64 pgs stuck inactive
    too few PGs per OSD (21 < min 30)
    Monitor clock skew detected
 monmap e1: 3 mons at {0=10.247.103.139:8567/0,1=10.247.103.140:8567/0,2=10.247.103.141:8567/0}
    election epoch 12, quorum 0,1,2 0,1,2
 osdmap e10: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds
  pgmap v38: 64 pgs, 1 pools, 0 bytes data, 0 objects
    33963 MB used, 3037 GB / 3070 GB avail
  64 creating

I don't seem to have any clock skew --
for i in {139..141}; do ssh $i date +%s; done
1507989554
1507989554
1507989554


On Sat, Oct 14, 2017 at 6:41 PM, David Turner wrote:

What is the output of your `ceph status`?


On Fri, Oct 13, 2017, 10:09 PM dE wrote:

On 10/14/2017 12:53 AM, David Turner wrote:

What does your environment look like?  Someone recently
on the mailing list had PGs stuck creating because of a
networking issue.

On Fri, Oct 13, 2017 at 2:03 PM Ronny Aasen wrote:

strange that no osd is acting for your pg's
can you show the output from
ceph osd tree


mvh
Ronny Aasen



On 13.10.2017 18:53, dE wrote:
> Hi,
>
>     I'm running ceph 10.2.5 on Debian (official
package).
>
> It can't seem to create any functional pools --
>
> ceph health detail
> HEALTH_ERR 64 pgs are stuck inactive for more than
300 seconds; 64 pgs
> stuck inactive; too few PGs per OSD (21 < min 30)
> pg 0.39 is stuck inactive for 652.741684, current
state creating, last
> acting []
> pg 0.38 is stuck inactive for 652.741688, current
state creating, last
> acting []
> pg 0.37 is stuck inactive for 652.741690, current
state creating, last
> acting []
> pg 0.36 is stuck inactive for 652.741692, current
state creating, last
> acting []
> pg 0.35 is stuck inactive for 652.741694, current
state creating, last
> acting []
> pg 0.34 is stuck inactive for 652.741696, current
state creating, last
> acting []
> pg 0.33 is stuck inactive for 652.741698, current
state creating, last
> acting []
> pg 0.32 is stuck inactive for 652.741701, current
state creating, last
> acting []
> pg 0.3 is stuck inactive for 652.741762, current
state creating, last
> acting []
> pg 0.2e is stuck inactive for 652.741715, current
state creating, last
> acting []
> pg 0.2d is stuck inactive for 652.741719, current
state creating, last
> acting []
> pg 0.2c is stuck inactive for 652.741721, current
state creating, last
> acting []
> pg 0.2b is stuck inactive for 652.741723, current
state creating, last
> acting []
> pg 0.2a is stuck inactive for 652.741725, current
state creating, last
> acting []
> pg 0.29 is stuck inactive for 652.741727, current
state creating, last
> 

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-14 Thread dE

On 10/14/2017 08:18 PM, David Turner wrote:


What are the ownership permissions on your osd folders? Clock skew 
cares about partial seconds.


It isn't the networking issue because your cluster isn't stuck 
peering. I'm not sure if the creating state happens on disk or in the 
cluster.



On Sat, Oct 14, 2017, 10:01 AM dE wrote:


I attached 1TB disks to each osd.

cluster 8161c90e-dbd2-4491-acf8-74449bef916a
 health HEALTH_ERR
    clock skew detected on mon.1, mon.2

    64 pgs are stuck inactive for more than 300 seconds
    64 pgs stuck inactive
    too few PGs per OSD (21 < min 30)
    Monitor clock skew detected
 monmap e1: 3 mons at {0=10.247.103.139:8567/0,1=10.247.103.140:8567/0,2=10.247.103.141:8567/0}
    election epoch 12, quorum 0,1,2 0,1,2
 osdmap e10: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds
  pgmap v38: 64 pgs, 1 pools, 0 bytes data, 0 objects
    33963 MB used, 3037 GB / 3070 GB avail
  64 creating

I don't seem to have any clock skew --
for i in {139..141}; do ssh $i date +%s; done
1507989554
1507989554
1507989554


On Sat, Oct 14, 2017 at 6:41 PM, David Turner wrote:

What is the output of your `ceph status`?


On Fri, Oct 13, 2017, 10:09 PM dE wrote:

On 10/14/2017 12:53 AM, David Turner wrote:

What does your environment look like?  Someone recently
on the mailing list had PGs stuck creating because of a
networking issue.

On Fri, Oct 13, 2017 at 2:03 PM Ronny Aasen wrote:

strange that no osd is acting for your pg's
can you show the output from
ceph osd tree


mvh
Ronny Aasen



On 13.10.2017 18:53, dE wrote:
> Hi,
>
>     I'm running ceph 10.2.5 on Debian (official
package).
>
> It can't seem to create any functional pools --
>
> ceph health detail
> HEALTH_ERR 64 pgs are stuck inactive for more than
300 seconds; 64 pgs
> stuck inactive; too few PGs per OSD (21 < min 30)
> pg 0.39 is stuck inactive for 652.741684, current
state creating, last
> acting []
> pg 0.38 is stuck inactive for 652.741688, current
state creating, last
> acting []
> pg 0.37 is stuck inactive for 652.741690, current
state creating, last
> acting []
> pg 0.36 is stuck inactive for 652.741692, current
state creating, last
> acting []
> pg 0.35 is stuck inactive for 652.741694, current
state creating, last
> acting []
> pg 0.34 is stuck inactive for 652.741696, current
state creating, last
> acting []
> pg 0.33 is stuck inactive for 652.741698, current
state creating, last
> acting []
> pg 0.32 is stuck inactive for 652.741701, current
state creating, last
> acting []
> pg 0.3 is stuck inactive for 652.741762, current
state creating, last
> acting []
> pg 0.2e is stuck inactive for 652.741715, current
state creating, last
> acting []
> pg 0.2d is stuck inactive for 652.741719, current
state creating, last
> acting []
> pg 0.2c is stuck inactive for 652.741721, current
state creating, last
> acting []
> pg 0.2b is stuck inactive for 652.741723, current
state creating, last
> acting []
> pg 0.2a is stuck inactive for 652.741725, current
state creating, last
> acting []
> pg 0.29 is stuck inactive for 652.741727, current
state creating, last
> acting []
> pg 0.28 is stuck inactive for 652.741730, current
state creating, last
> acting []
> pg 0.27 is stuck inactive for 652.741732, current
state creating, last
> acting []
> 

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-14 Thread David Turner
What is the output of your `ceph status`?

On Fri, Oct 13, 2017, 10:09 PM dE  wrote:

> On 10/14/2017 12:53 AM, David Turner wrote:
>
> What does your environment look like?  Someone recently on the mailing
> list had PGs stuck creating because of a networking issue.
>
On Fri, Oct 13, 2017 at 2:03 PM Ronny Aasen wrote:
>
>> strange that no osd is acting for your pg's
>> can you show the output from
>> ceph osd tree
>>
>>
>> mvh
>> Ronny Aasen
>>
>>
>>
>> On 13.10.2017 18:53, dE wrote:
>> > Hi,
>> >
>> > I'm running ceph 10.2.5 on Debian (official package).
>> >
>> > It can't seem to create any functional pools --
>> >
>> > ceph health detail
>> > HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs
>> > stuck inactive; too few PGs per OSD (21 < min 30)
>> > pg 0.39 is stuck inactive for 652.741684, current state creating, last
>> > acting []
>> > pg 0.38 is stuck inactive for 652.741688, current state creating, last
>> > acting []
>> > pg 0.37 is stuck inactive for 652.741690, current state creating, last
>> > acting []
>> > pg 0.36 is stuck inactive for 652.741692, current state creating, last
>> > acting []
>> > pg 0.35 is stuck inactive for 652.741694, current state creating, last
>> > acting []
>> > pg 0.34 is stuck inactive for 652.741696, current state creating, last
>> > acting []
>> > pg 0.33 is stuck inactive for 652.741698, current state creating, last
>> > acting []
>> > pg 0.32 is stuck inactive for 652.741701, current state creating, last
>> > acting []
>> > pg 0.3 is stuck inactive for 652.741762, current state creating, last
>> > acting []
>> > pg 0.2e is stuck inactive for 652.741715, current state creating, last
>> > acting []
>> > pg 0.2d is stuck inactive for 652.741719, current state creating, last
>> > acting []
>> > pg 0.2c is stuck inactive for 652.741721, current state creating, last
>> > acting []
>> > pg 0.2b is stuck inactive for 652.741723, current state creating, last
>> > acting []
>> > pg 0.2a is stuck inactive for 652.741725, current state creating, last
>> > acting []
>> > pg 0.29 is stuck inactive for 652.741727, current state creating, last
>> > acting []
>> > pg 0.28 is stuck inactive for 652.741730, current state creating, last
>> > acting []
>> > pg 0.27 is stuck inactive for 652.741732, current state creating, last
>> > acting []
>> > pg 0.26 is stuck inactive for 652.741734, current state creating, last
>> > acting []
>> > pg 0.3e is stuck inactive for 652.741707, current state creating, last
>> > acting []
>> > pg 0.f is stuck inactive for 652.741761, current state creating, last
>> > acting []
>> > pg 0.3f is stuck inactive for 652.741708, current state creating, last
>> > acting []
>> > pg 0.10 is stuck inactive for 652.741763, current state creating, last
>> > acting []
>> > pg 0.4 is stuck inactive for 652.741773, current state creating, last
>> > acting []
>> > pg 0.5 is stuck inactive for 652.741774, current state creating, last
>> > acting []
>> > pg 0.3a is stuck inactive for 652.741717, current state creating, last
>> > acting []
>> > pg 0.b is stuck inactive for 652.741771, current state creating, last
>> > acting []
>> > pg 0.c is stuck inactive for 652.741772, current state creating, last
>> > acting []
>> > pg 0.3b is stuck inactive for 652.741721, current state creating, last
>> > acting []
>> > pg 0.d is stuck inactive for 652.741774, current state creating, last
>> > acting []
>> > pg 0.3c is stuck inactive for 652.741722, current state creating, last
>> > acting []
>> > pg 0.e is stuck inactive for 652.741776, current state creating, last
>> > acting []
>> > pg 0.3d is stuck inactive for 652.741724, current state creating, last
>> > acting []
>> > pg 0.22 is stuck inactive for 652.741756, current state creating, last
>> > acting []
>> > pg 0.21 is stuck inactive for 652.741758, current state creating, last
>> > acting []
>> > pg 0.a is stuck inactive for 652.741783, current state creating, last
>> > acting []
>> > pg 0.20 is stuck inactive for 652.741761, current state creating, last
>> > acting []
>> > pg 0.9 is stuck inactive for 652.741787, current state creating, last
>> > acting []
>> > pg 0.1f is stuck inactive for 652.741764, current state creating, last
>> > acting []
>> > pg 0.8 is stuck inactive for 652.741790, current state creating, last
>> > acting []
>> > pg 0.7 is stuck inactive for 652.741792, current state creating, last
>> > acting []
>> > pg 0.6 is stuck inactive for 652.741794, current state creating, last
>> > acting []
>> > pg 0.1e is stuck inactive for 652.741770, current state creating, last
>> > acting []
>> > pg 0.1d is stuck inactive for 652.741772, current state creating, last
>> > acting []
>> > pg 0.1c is stuck inactive for 652.741774, current state creating, last
>> > acting []
>> > pg 0.1b is stuck inactive for 652.741777, current state creating, last
>> > acting []
>> > pg 0.1a is stuck inactive for 652.741784, current state creating, last
>> 

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread dE

On 10/14/2017 12:53 AM, David Turner wrote:
What does your environment look like?  Someone recently on the mailing 
list had PGs stuck creating because of a networking issue.


On Fri, Oct 13, 2017 at 2:03 PM Ronny Aasen wrote:


strange that no osd is acting for your pg's
can you show the output from
ceph osd tree


mvh
Ronny Aasen



On 13.10.2017 18:53, dE wrote:
> Hi,
>
>     I'm running ceph 10.2.5 on Debian (official package).
>
> It can't seem to create any functional pools --
>
> ceph health detail
> HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds;
64 pgs
> stuck inactive; too few PGs per OSD (21 < min 30)
> pg 0.39 is stuck inactive for 652.741684, current state
creating, last
> acting []
> pg 0.38 is stuck inactive for 652.741688, current state
creating, last
> acting []
> pg 0.37 is stuck inactive for 652.741690, current state
creating, last
> acting []
> pg 0.36 is stuck inactive for 652.741692, current state
creating, last
> acting []
> pg 0.35 is stuck inactive for 652.741694, current state
creating, last
> acting []
> pg 0.34 is stuck inactive for 652.741696, current state
creating, last
> acting []
> pg 0.33 is stuck inactive for 652.741698, current state
creating, last
> acting []
> pg 0.32 is stuck inactive for 652.741701, current state
creating, last
> acting []
> pg 0.3 is stuck inactive for 652.741762, current state creating,
last
> acting []
> pg 0.2e is stuck inactive for 652.741715, current state
creating, last
> acting []
> pg 0.2d is stuck inactive for 652.741719, current state
creating, last
> acting []
> pg 0.2c is stuck inactive for 652.741721, current state
creating, last
> acting []
> pg 0.2b is stuck inactive for 652.741723, current state
creating, last
> acting []
> pg 0.2a is stuck inactive for 652.741725, current state
creating, last
> acting []
> pg 0.29 is stuck inactive for 652.741727, current state
creating, last
> acting []
> pg 0.28 is stuck inactive for 652.741730, current state
creating, last
> acting []
> pg 0.27 is stuck inactive for 652.741732, current state
creating, last
> acting []
> pg 0.26 is stuck inactive for 652.741734, current state
creating, last
> acting []
> pg 0.3e is stuck inactive for 652.741707, current state
creating, last
> acting []
> pg 0.f is stuck inactive for 652.741761, current state creating,
last
> acting []
> pg 0.3f is stuck inactive for 652.741708, current state
creating, last
> acting []
> pg 0.10 is stuck inactive for 652.741763, current state
creating, last
> acting []
> pg 0.4 is stuck inactive for 652.741773, current state creating,
last
> acting []
> pg 0.5 is stuck inactive for 652.741774, current state creating,
last
> acting []
> pg 0.3a is stuck inactive for 652.741717, current state
creating, last
> acting []
> pg 0.b is stuck inactive for 652.741771, current state creating,
last
> acting []
> pg 0.c is stuck inactive for 652.741772, current state creating,
last
> acting []
> pg 0.3b is stuck inactive for 652.741721, current state
creating, last
> acting []
> pg 0.d is stuck inactive for 652.741774, current state creating,
last
> acting []
> pg 0.3c is stuck inactive for 652.741722, current state
creating, last
> acting []
> pg 0.e is stuck inactive for 652.741776, current state creating,
last
> acting []
> pg 0.3d is stuck inactive for 652.741724, current state
creating, last
> acting []
> pg 0.22 is stuck inactive for 652.741756, current state
creating, last
> acting []
> pg 0.21 is stuck inactive for 652.741758, current state
creating, last
> acting []
> pg 0.a is stuck inactive for 652.741783, current state creating,
last
> acting []
> pg 0.20 is stuck inactive for 652.741761, current state
creating, last
> acting []
> pg 0.9 is stuck inactive for 652.741787, current state creating,
last
> acting []
> pg 0.1f is stuck inactive for 652.741764, current state
creating, last
> acting []
> pg 0.8 is stuck inactive for 652.741790, current state creating,
last
> acting []
> pg 0.7 is stuck inactive for 652.741792, current state creating,
last
> acting []
> pg 0.6 is stuck inactive for 652.741794, current state creating,
last
> acting []
> pg 0.1e is stuck inactive for 652.741770, current state
creating, last
> acting []
> pg 0.1d is stuck inactive for 652.741772, current state
creating, last
> acting []
> pg 0.1c is stuck inactive for 652.741774, current state

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread David Turner
What does your environment look like?  Someone recently on the mailing list
had PGs stuck creating because of a networking issue.
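
A rough sketch of such a network check between the OSD hosts (the 10.247.103.x addresses come from the monmap quoted elsewhere in this thread and are assumed to also be the OSD hosts):

ping -c 3 10.247.103.140
nc -zv 10.247.103.140 6800          # osds normally listen in the 6800-7300/tcp range
ss -tlnp | grep ceph-osd            # on each osd host: are the daemons listening?
grep -E 'public.network|cluster.network' /etc/ceph/ceph.conf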

On Fri, Oct 13, 2017 at 2:03 PM Ronny Aasen 
wrote:

> strange that no osd is acting for your pg's
> can you show the output from
> ceph osd tree
>
>
> mvh
> Ronny Aasen
>
>
>
> On 13.10.2017 18:53, dE wrote:
> > Hi,
> >
> > I'm running ceph 10.2.5 on Debian (official package).
> >
> > It can't seem to create any functional pools --
> >
> > ceph health detail
> > HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs
> > stuck inactive; too few PGs per OSD (21 < min 30)
> > pg 0.39 is stuck inactive for 652.741684, current state creating, last
> > acting []
> > pg 0.38 is stuck inactive for 652.741688, current state creating, last
> > acting []
> > pg 0.37 is stuck inactive for 652.741690, current state creating, last
> > acting []
> > pg 0.36 is stuck inactive for 652.741692, current state creating, last
> > acting []
> > pg 0.35 is stuck inactive for 652.741694, current state creating, last
> > acting []
> > pg 0.34 is stuck inactive for 652.741696, current state creating, last
> > acting []
> > pg 0.33 is stuck inactive for 652.741698, current state creating, last
> > acting []
> > pg 0.32 is stuck inactive for 652.741701, current state creating, last
> > acting []
> > pg 0.3 is stuck inactive for 652.741762, current state creating, last
> > acting []
> > pg 0.2e is stuck inactive for 652.741715, current state creating, last
> > acting []
> > pg 0.2d is stuck inactive for 652.741719, current state creating, last
> > acting []
> > pg 0.2c is stuck inactive for 652.741721, current state creating, last
> > acting []
> > pg 0.2b is stuck inactive for 652.741723, current state creating, last
> > acting []
> > pg 0.2a is stuck inactive for 652.741725, current state creating, last
> > acting []
> > pg 0.29 is stuck inactive for 652.741727, current state creating, last
> > acting []
> > pg 0.28 is stuck inactive for 652.741730, current state creating, last
> > acting []
> > pg 0.27 is stuck inactive for 652.741732, current state creating, last
> > acting []
> > pg 0.26 is stuck inactive for 652.741734, current state creating, last
> > acting []
> > pg 0.3e is stuck inactive for 652.741707, current state creating, last
> > acting []
> > pg 0.f is stuck inactive for 652.741761, current state creating, last
> > acting []
> > pg 0.3f is stuck inactive for 652.741708, current state creating, last
> > acting []
> > pg 0.10 is stuck inactive for 652.741763, current state creating, last
> > acting []
> > pg 0.4 is stuck inactive for 652.741773, current state creating, last
> > acting []
> > pg 0.5 is stuck inactive for 652.741774, current state creating, last
> > acting []
> > pg 0.3a is stuck inactive for 652.741717, current state creating, last
> > acting []
> > pg 0.b is stuck inactive for 652.741771, current state creating, last
> > acting []
> > pg 0.c is stuck inactive for 652.741772, current state creating, last
> > acting []
> > pg 0.3b is stuck inactive for 652.741721, current state creating, last
> > acting []
> > pg 0.d is stuck inactive for 652.741774, current state creating, last
> > acting []
> > pg 0.3c is stuck inactive for 652.741722, current state creating, last
> > acting []
> > pg 0.e is stuck inactive for 652.741776, current state creating, last
> > acting []
> > pg 0.3d is stuck inactive for 652.741724, current state creating, last
> > acting []
> > pg 0.22 is stuck inactive for 652.741756, current state creating, last
> > acting []
> > pg 0.21 is stuck inactive for 652.741758, current state creating, last
> > acting []
> > pg 0.a is stuck inactive for 652.741783, current state creating, last
> > acting []
> > pg 0.20 is stuck inactive for 652.741761, current state creating, last
> > acting []
> > pg 0.9 is stuck inactive for 652.741787, current state creating, last
> > acting []
> > pg 0.1f is stuck inactive for 652.741764, current state creating, last
> > acting []
> > pg 0.8 is stuck inactive for 652.741790, current state creating, last
> > acting []
> > pg 0.7 is stuck inactive for 652.741792, current state creating, last
> > acting []
> > pg 0.6 is stuck inactive for 652.741794, current state creating, last
> > acting []
> > pg 0.1e is stuck inactive for 652.741770, current state creating, last
> > acting []
> > pg 0.1d is stuck inactive for 652.741772, current state creating, last
> > acting []
> > pg 0.1c is stuck inactive for 652.741774, current state creating, last
> > acting []
> > pg 0.1b is stuck inactive for 652.741777, current state creating, last
> > acting []
> > pg 0.1a is stuck inactive for 652.741784, current state creating, last
> > acting []
> > pg 0.2 is stuck inactive for 652.741812, current state creating, last
> > acting []
> > pg 0.31 is stuck inactive for 652.741762, current state creating, last
> > acting []
> > pg 0.19 is stuck inactive for 652.741789, current state creating, last
> > acting []

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread Gerhard W. Recher
you specify a mon on 0.0.0.0 

my ceph.conf

[mon.2]
 host = pve03
 mon addr = 192.168.100.143:6789

[mon.3]
 host = pve04
 mon addr = 192.168.100.144:6789

[mon.0]
 host = pve01
 mon addr = 192.168.100.141:6789

[mon.1]
 host = pve02
 mon addr = 192.168.100.142:6789
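
To see what address each running monitor actually registered and bound to (assuming shell access on the mon hosts; 6789 is the default port, while the cluster in this thread uses 8567):

ceph mon dump
ss -tlnp | grep ceph-mon    # run on each mon host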



Gerhard W. Recher

net4sec UG (haftungsbeschränkt)
Leitenweg 6
86929 Penzing

+49 171 4802507
Am 13.10.2017 um 20:01 schrieb Ronny Aasen:
> strange that no osd is acting for your pg's
> can you show the output from
> ceph osd tree
>
>
> mvh
> Ronny Aasen
>
>
>
> On 13.10.2017 18:53, dE wrote:
>> Hi,
>>
>>     I'm running ceph 10.2.5 on Debian (official package).
>>
>> It can't seem to create any functional pools --
>>
>> ceph health detail
>> HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64
>> pgs stuck inactive; too few PGs per OSD (21 < min 30)
>> pg 0.39 is stuck inactive for 652.741684, current state creating,
>> last acting []
>> pg 0.38 is stuck inactive for 652.741688, current state creating,
>> last acting []
>> pg 0.37 is stuck inactive for 652.741690, current state creating,
>> last acting []
>> pg 0.36 is stuck inactive for 652.741692, current state creating,
>> last acting []
>> pg 0.35 is stuck inactive for 652.741694, current state creating,
>> last acting []
>> pg 0.34 is stuck inactive for 652.741696, current state creating,
>> last acting []
>> pg 0.33 is stuck inactive for 652.741698, current state creating,
>> last acting []
>> pg 0.32 is stuck inactive for 652.741701, current state creating,
>> last acting []
>> pg 0.3 is stuck inactive for 652.741762, current state creating, last
>> acting []
>> pg 0.2e is stuck inactive for 652.741715, current state creating,
>> last acting []
>> pg 0.2d is stuck inactive for 652.741719, current state creating,
>> last acting []
>> pg 0.2c is stuck inactive for 652.741721, current state creating,
>> last acting []
>> pg 0.2b is stuck inactive for 652.741723, current state creating,
>> last acting []
>> pg 0.2a is stuck inactive for 652.741725, current state creating,
>> last acting []
>> pg 0.29 is stuck inactive for 652.741727, current state creating,
>> last acting []
>> pg 0.28 is stuck inactive for 652.741730, current state creating,
>> last acting []
>> pg 0.27 is stuck inactive for 652.741732, current state creating,
>> last acting []
>> pg 0.26 is stuck inactive for 652.741734, current state creating,
>> last acting []
>> pg 0.3e is stuck inactive for 652.741707, current state creating,
>> last acting []
>> pg 0.f is stuck inactive for 652.741761, current state creating, last
>> acting []
>> pg 0.3f is stuck inactive for 652.741708, current state creating,
>> last acting []
>> pg 0.10 is stuck inactive for 652.741763, current state creating,
>> last acting []
>> pg 0.4 is stuck inactive for 652.741773, current state creating, last
>> acting []
>> pg 0.5 is stuck inactive for 652.741774, current state creating, last
>> acting []
>> pg 0.3a is stuck inactive for 652.741717, current state creating,
>> last acting []
>> pg 0.b is stuck inactive for 652.741771, current state creating, last
>> acting []
>> pg 0.c is stuck inactive for 652.741772, current state creating, last
>> acting []
>> pg 0.3b is stuck inactive for 652.741721, current state creating,
>> last acting []
>> pg 0.d is stuck inactive for 652.741774, current state creating, last
>> acting []
>> pg 0.3c is stuck inactive for 652.741722, current state creating,
>> last acting []
>> pg 0.e is stuck inactive for 652.741776, current state creating, last
>> acting []
>> pg 0.3d is stuck inactive for 652.741724, current state creating,
>> last acting []
>> pg 0.22 is stuck inactive for 652.741756, current state creating,
>> last acting []
>> pg 0.21 is stuck inactive for 652.741758, current state creating,
>> last acting []
>> pg 0.a is stuck inactive for 652.741783, current state creating, last
>> acting []
>> pg 0.20 is stuck inactive for 652.741761, current state creating,
>> last acting []
>> pg 0.9 is stuck inactive for 652.741787, current state creating, last
>> acting []
>> pg 0.1f is stuck inactive for 652.741764, current state creating,
>> last acting []
>> pg 0.8 is stuck inactive for 652.741790, current state creating, last
>> acting []
>> pg 0.7 is stuck inactive for 652.741792, current state creating, last
>> acting []
>> pg 0.6 is stuck inactive for 652.741794, current state creating, last
>> acting []
>> pg 0.1e is stuck inactive for 652.741770, current state creating,
>> last acting []
>> pg 0.1d is stuck inactive for 652.741772, current state creating,
>> last acting []
>> pg 0.1c is stuck inactive for 652.741774, current state creating,
>> last acting []
>> pg 0.1b is stuck inactive for 652.741777, current state creating,
>> last acting []
>> pg 0.1a is stuck inactive for 652.741784, current state creating,
>> last acting []
>> pg 0.2 is stuck inactive for 652.741812, current state creating, last
>> acting []
>> pg 0.31 is 

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread dE

On 10/13/2017 10:23 PM, dE wrote:

Hi,

    I'm running ceph 10.2.5 on Debian (official package).

It can't seem to create any functional pools --

ceph health detail
HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs 
stuck inactive; too few PGs per OSD (21 < min 30)
pg 0.39 is stuck inactive for 652.741684, current state creating, last 
acting []
pg 0.38 is stuck inactive for 652.741688, current state creating, last 
acting []
pg 0.37 is stuck inactive for 652.741690, current state creating, last 
acting []
pg 0.36 is stuck inactive for 652.741692, current state creating, last 
acting []
pg 0.35 is stuck inactive for 652.741694, current state creating, last 
acting []
pg 0.34 is stuck inactive for 652.741696, current state creating, last 
acting []
pg 0.33 is stuck inactive for 652.741698, current state creating, last 
acting []
pg 0.32 is stuck inactive for 652.741701, current state creating, last 
acting []
pg 0.3 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.2e is stuck inactive for 652.741715, current state creating, last 
acting []
pg 0.2d is stuck inactive for 652.741719, current state creating, last 
acting []
pg 0.2c is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.2b is stuck inactive for 652.741723, current state creating, last 
acting []
pg 0.2a is stuck inactive for 652.741725, current state creating, last 
acting []
pg 0.29 is stuck inactive for 652.741727, current state creating, last 
acting []
pg 0.28 is stuck inactive for 652.741730, current state creating, last 
acting []
pg 0.27 is stuck inactive for 652.741732, current state creating, last 
acting []
pg 0.26 is stuck inactive for 652.741734, current state creating, last 
acting []
pg 0.3e is stuck inactive for 652.741707, current state creating, last 
acting []
pg 0.f is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.3f is stuck inactive for 652.741708, current state creating, last 
acting []
pg 0.10 is stuck inactive for 652.741763, current state creating, last 
acting []
pg 0.4 is stuck inactive for 652.741773, current state creating, last 
acting []
pg 0.5 is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3a is stuck inactive for 652.741717, current state creating, last 
acting []
pg 0.b is stuck inactive for 652.741771, current state creating, last 
acting []
pg 0.c is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.3b is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.d is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3c is stuck inactive for 652.741722, current state creating, last 
acting []
pg 0.e is stuck inactive for 652.741776, current state creating, last 
acting []
pg 0.3d is stuck inactive for 652.741724, current state creating, last 
acting []
pg 0.22 is stuck inactive for 652.741756, current state creating, last 
acting []
pg 0.21 is stuck inactive for 652.741758, current state creating, last 
acting []
pg 0.a is stuck inactive for 652.741783, current state creating, last 
acting []
pg 0.20 is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.9 is stuck inactive for 652.741787, current state creating, last 
acting []
pg 0.1f is stuck inactive for 652.741764, current state creating, last 
acting []
pg 0.8 is stuck inactive for 652.741790, current state creating, last 
acting []
pg 0.7 is stuck inactive for 652.741792, current state creating, last 
acting []
pg 0.6 is stuck inactive for 652.741794, current state creating, last 
acting []
pg 0.1e is stuck inactive for 652.741770, current state creating, last 
acting []
pg 0.1d is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.1c is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.1b is stuck inactive for 652.741777, current state creating, last 
acting []
pg 0.1a is stuck inactive for 652.741784, current state creating, last 
acting []
pg 0.2 is stuck inactive for 652.741812, current state creating, last 
acting []
pg 0.31 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.19 is stuck inactive for 652.741789, current state creating, last 
acting []
pg 0.11 is stuck inactive for 652.741797, current state creating, last 
acting []
pg 0.18 is stuck inactive for 652.741793, current state creating, last 
acting []
pg 0.1 is stuck inactive for 652.741820, current state creating, last 
acting []
pg 0.30 is stuck inactive for 652.741769, current state creating, last 
acting []
pg 0.17 is stuck inactive for 652.741797, current state creating, last 
acting []
pg 0.0 is stuck inactive for 652.741829, current state creating, last 
acting []
pg 0.2f is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.16 is stuck inactive for 652.741802, current state creating, last 
acting []
pg 0.12 is stuck inactive for 652.741807, current 

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread dE

On 10/13/2017 10:23 PM, dE wrote:

Hi,

    I'm running ceph 10.2.5 on Debian (official package).

It can't seem to create any functional pools --

ceph health detail
HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs 
stuck inactive; too few PGs per OSD (21 < min 30)
pg 0.39 is stuck inactive for 652.741684, current state creating, last 
acting []
pg 0.38 is stuck inactive for 652.741688, current state creating, last 
acting []
pg 0.37 is stuck inactive for 652.741690, current state creating, last 
acting []
pg 0.36 is stuck inactive for 652.741692, current state creating, last 
acting []
pg 0.35 is stuck inactive for 652.741694, current state creating, last 
acting []
pg 0.34 is stuck inactive for 652.741696, current state creating, last 
acting []
pg 0.33 is stuck inactive for 652.741698, current state creating, last 
acting []
pg 0.32 is stuck inactive for 652.741701, current state creating, last 
acting []
pg 0.3 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.2e is stuck inactive for 652.741715, current state creating, last 
acting []
pg 0.2d is stuck inactive for 652.741719, current state creating, last 
acting []
pg 0.2c is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.2b is stuck inactive for 652.741723, current state creating, last 
acting []
pg 0.2a is stuck inactive for 652.741725, current state creating, last 
acting []
pg 0.29 is stuck inactive for 652.741727, current state creating, last 
acting []
pg 0.28 is stuck inactive for 652.741730, current state creating, last 
acting []
pg 0.27 is stuck inactive for 652.741732, current state creating, last 
acting []
pg 0.26 is stuck inactive for 652.741734, current state creating, last 
acting []
pg 0.3e is stuck inactive for 652.741707, current state creating, last 
acting []
pg 0.f is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.3f is stuck inactive for 652.741708, current state creating, last 
acting []
pg 0.10 is stuck inactive for 652.741763, current state creating, last 
acting []
pg 0.4 is stuck inactive for 652.741773, current state creating, last 
acting []
pg 0.5 is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3a is stuck inactive for 652.741717, current state creating, last 
acting []
pg 0.b is stuck inactive for 652.741771, current state creating, last 
acting []
pg 0.c is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.3b is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.d is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3c is stuck inactive for 652.741722, current state creating, last 
acting []
pg 0.e is stuck inactive for 652.741776, current state creating, last 
acting []
pg 0.3d is stuck inactive for 652.741724, current state creating, last 
acting []
pg 0.22 is stuck inactive for 652.741756, current state creating, last 
acting []
pg 0.21 is stuck inactive for 652.741758, current state creating, last 
acting []
pg 0.a is stuck inactive for 652.741783, current state creating, last 
acting []
pg 0.20 is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.9 is stuck inactive for 652.741787, current state creating, last 
acting []
pg 0.1f is stuck inactive for 652.741764, current state creating, last 
acting []
pg 0.8 is stuck inactive for 652.741790, current state creating, last 
acting []
pg 0.7 is stuck inactive for 652.741792, current state creating, last 
acting []
pg 0.6 is stuck inactive for 652.741794, current state creating, last 
acting []
pg 0.1e is stuck inactive for 652.741770, current state creating, last 
acting []
pg 0.1d is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.1c is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.1b is stuck inactive for 652.741777, current state creating, last 
acting []
pg 0.1a is stuck inactive for 652.741784, current state creating, last 
acting []
pg 0.2 is stuck inactive for 652.741812, current state creating, last 
acting []
pg 0.31 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.19 is stuck inactive for 652.741789, current state creating, last 
acting []
pg 0.11 is stuck inactive for 652.741797, current state creating, last 
acting []
pg 0.18 is stuck inactive for 652.741793, current state creating, last 
acting []
pg 0.1 is stuck inactive for 652.741820, current state creating, last 
acting []
pg 0.30 is stuck inactive for 652.741769, current state creating, last 
acting []
pg 0.17 is stuck inactive for 652.741797, current state creating, last 
acting []
pg 0.0 is stuck inactive for 652.741829, current state creating, last 
acting []
pg 0.2f is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.16 is stuck inactive for 652.741802, current state creating, last 
acting []
pg 0.12 is stuck inactive for 652.741807, current 

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread Ronny Aasen

strange that no osd is acting for your pg's
can you show the output from
ceph osd tree
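
Along with the tree, something like this might help pin it down (0.39 is just one of the stuck pgs from the health output; zero crush weights, or osds sitting outside the default root, would explain the empty acting set):

ceph osd df        # per-osd crush weight and utilisation
ceph pg map 0.39   # the mapping crush computes for one of the creating pgs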


mvh
Ronny Aasen



On 13.10.2017 18:53, dE wrote:

Hi,

    I'm running ceph 10.2.5 on Debian (official package).

It can't seem to create any functional pools --

ceph health detail
HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs 
stuck inactive; too few PGs per OSD (21 < min 30)
pg 0.39 is stuck inactive for 652.741684, current state creating, last 
acting []
pg 0.38 is stuck inactive for 652.741688, current state creating, last 
acting []
pg 0.37 is stuck inactive for 652.741690, current state creating, last 
acting []
pg 0.36 is stuck inactive for 652.741692, current state creating, last 
acting []
pg 0.35 is stuck inactive for 652.741694, current state creating, last 
acting []
pg 0.34 is stuck inactive for 652.741696, current state creating, last 
acting []
pg 0.33 is stuck inactive for 652.741698, current state creating, last 
acting []
pg 0.32 is stuck inactive for 652.741701, current state creating, last 
acting []
pg 0.3 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.2e is stuck inactive for 652.741715, current state creating, last 
acting []
pg 0.2d is stuck inactive for 652.741719, current state creating, last 
acting []
pg 0.2c is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.2b is stuck inactive for 652.741723, current state creating, last 
acting []
pg 0.2a is stuck inactive for 652.741725, current state creating, last 
acting []
pg 0.29 is stuck inactive for 652.741727, current state creating, last 
acting []
pg 0.28 is stuck inactive for 652.741730, current state creating, last 
acting []
pg 0.27 is stuck inactive for 652.741732, current state creating, last 
acting []
pg 0.26 is stuck inactive for 652.741734, current state creating, last 
acting []
pg 0.3e is stuck inactive for 652.741707, current state creating, last 
acting []
pg 0.f is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.3f is stuck inactive for 652.741708, current state creating, last 
acting []
pg 0.10 is stuck inactive for 652.741763, current state creating, last 
acting []
pg 0.4 is stuck inactive for 652.741773, current state creating, last 
acting []
pg 0.5 is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3a is stuck inactive for 652.741717, current state creating, last 
acting []
pg 0.b is stuck inactive for 652.741771, current state creating, last 
acting []
pg 0.c is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.3b is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.d is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3c is stuck inactive for 652.741722, current state creating, last 
acting []
pg 0.e is stuck inactive for 652.741776, current state creating, last 
acting []
pg 0.3d is stuck inactive for 652.741724, current state creating, last 
acting []
pg 0.22 is stuck inactive for 652.741756, current state creating, last 
acting []
pg 0.21 is stuck inactive for 652.741758, current state creating, last 
acting []
pg 0.a is stuck inactive for 652.741783, current state creating, last 
acting []
pg 0.20 is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.9 is stuck inactive for 652.741787, current state creating, last 
acting []
pg 0.1f is stuck inactive for 652.741764, current state creating, last 
acting []
pg 0.8 is stuck inactive for 652.741790, current state creating, last 
acting []
pg 0.7 is stuck inactive for 652.741792, current state creating, last 
acting []
pg 0.6 is stuck inactive for 652.741794, current state creating, last 
acting []
pg 0.1e is stuck inactive for 652.741770, current state creating, last 
acting []
pg 0.1d is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.1c is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.1b is stuck inactive for 652.741777, current state creating, last 
acting []
pg 0.1a is stuck inactive for 652.741784, current state creating, last 
acting []
pg 0.2 is stuck inactive for 652.741812, current state creating, last 
acting []
pg 0.31 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.19 is stuck inactive for 652.741789, current state creating, last 
acting []
pg 0.11 is stuck inactive for 652.741797, current state creating, last 
acting []
pg 0.18 is stuck inactive for 652.741793, current state creating, last 
acting []
pg 0.1 is stuck inactive for 652.741820, current state creating, last 
acting []
pg 0.30 is stuck inactive for 652.741769, current state creating, last 
acting []
pg 0.17 is stuck inactive for 652.741797, current state creating, last 
acting []
pg 0.0 is stuck inactive for 652.741829, current state creating, last 
acting []
pg 0.2f is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.16 is stuck inactive 

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread Michael Kuriger
You may not have enough OSDs to satisfy the crush ruleset.  
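
A sketch for checking that (rbd is assumed as the pool name here, since it is the Jewel default and matches the pool-0 pg ids in the health output):

ceph osd pool get rbd size
ceph osd pool get rbd crush_ruleset   # jewel-era name; luminous renamed it crush_rule
ceph osd crush rule dump
ceph osd tree                         # enough hosts/osds with weight > 0 to satisfy the rule?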

 
Mike Kuriger 
Sr. Unix Systems Engineer
818-434-6195 
 

On 10/13/17, 9:53 AM, "ceph-users on behalf of dE" wrote:

Hi,

 I'm running ceph 10.2.5 on Debian (official package).

It can't seem to create any functional pools --

ceph health detail
HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs 
stuck inactive; too few PGs per OSD (21 < min 30)
pg 0.39 is stuck inactive for 652.741684, current state creating, last 
acting []
pg 0.38 is stuck inactive for 652.741688, current state creating, last 
acting []
pg 0.37 is stuck inactive for 652.741690, current state creating, last 
acting []
pg 0.36 is stuck inactive for 652.741692, current state creating, last 
acting []
pg 0.35 is stuck inactive for 652.741694, current state creating, last 
acting []
pg 0.34 is stuck inactive for 652.741696, current state creating, last 
acting []
pg 0.33 is stuck inactive for 652.741698, current state creating, last 
acting []
pg 0.32 is stuck inactive for 652.741701, current state creating, last 
acting []
pg 0.3 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.2e is stuck inactive for 652.741715, current state creating, last 
acting []
pg 0.2d is stuck inactive for 652.741719, current state creating, last 
acting []
pg 0.2c is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.2b is stuck inactive for 652.741723, current state creating, last 
acting []
pg 0.2a is stuck inactive for 652.741725, current state creating, last 
acting []
pg 0.29 is stuck inactive for 652.741727, current state creating, last 
acting []
pg 0.28 is stuck inactive for 652.741730, current state creating, last 
acting []
pg 0.27 is stuck inactive for 652.741732, current state creating, last 
acting []
pg 0.26 is stuck inactive for 652.741734, current state creating, last 
acting []
pg 0.3e is stuck inactive for 652.741707, current state creating, last 
acting []
pg 0.f is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.3f is stuck inactive for 652.741708, current state creating, last 
acting []
pg 0.10 is stuck inactive for 652.741763, current state creating, last 
acting []
pg 0.4 is stuck inactive for 652.741773, current state creating, last 
acting []
pg 0.5 is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3a is stuck inactive for 652.741717, current state creating, last 
acting []
pg 0.b is stuck inactive for 652.741771, current state creating, last 
acting []
pg 0.c is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.3b is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.d is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3c is stuck inactive for 652.741722, current state creating, last 
acting []
pg 0.e is stuck inactive for 652.741776, current state creating, last 
acting []
pg 0.3d is stuck inactive for 652.741724, current state creating, last 
acting []
pg 0.22 is stuck inactive for 652.741756, current state creating, last 
acting []
pg 0.21 is stuck inactive for 652.741758, current state creating, last 
acting []
pg 0.a is stuck inactive for 652.741783, current state creating, last 
acting []
pg 0.20 is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.9 is stuck inactive for 652.741787, current state creating, last 
acting []
pg 0.1f is stuck inactive for 652.741764, current state creating, last 
acting []
pg 0.8 is stuck inactive for 652.741790, current state creating, last 
acting []
pg 0.7 is stuck inactive for 652.741792, current state creating, last 
acting []
pg 0.6 is stuck inactive for 652.741794, current state creating, last 
acting []
pg 0.1e is stuck inactive for 652.741770, current state creating, last 
acting []
pg 0.1d is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.1c is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.1b is stuck inactive for 652.741777, current state creating, last 
acting []
pg 0.1a is stuck inactive for 652.741784, current state creating, last 
acting []
pg 0.2 is stuck inactive for 652.741812, current state creating, last 
acting []
pg 0.31 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.19 is stuck inactive for 652.741789, current state creating, last 
acting []
pg 0.11 is stuck inactive for 

[ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread dE

Hi,

    I'm running ceph 10.2.5 on Debian (official package).

It can't seem to create any functional pools --

ceph health detail
HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs 
stuck inactive; too few PGs per OSD (21 < min 30)
pg 0.39 is stuck inactive for 652.741684, current state creating, last 
acting []
pg 0.38 is stuck inactive for 652.741688, current state creating, last 
acting []
pg 0.37 is stuck inactive for 652.741690, current state creating, last 
acting []
pg 0.36 is stuck inactive for 652.741692, current state creating, last 
acting []
pg 0.35 is stuck inactive for 652.741694, current state creating, last 
acting []
pg 0.34 is stuck inactive for 652.741696, current state creating, last 
acting []
pg 0.33 is stuck inactive for 652.741698, current state creating, last 
acting []
pg 0.32 is stuck inactive for 652.741701, current state creating, last 
acting []
pg 0.3 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.2e is stuck inactive for 652.741715, current state creating, last 
acting []
pg 0.2d is stuck inactive for 652.741719, current state creating, last 
acting []
pg 0.2c is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.2b is stuck inactive for 652.741723, current state creating, last 
acting []
pg 0.2a is stuck inactive for 652.741725, current state creating, last 
acting []
pg 0.29 is stuck inactive for 652.741727, current state creating, last 
acting []
pg 0.28 is stuck inactive for 652.741730, current state creating, last 
acting []
pg 0.27 is stuck inactive for 652.741732, current state creating, last 
acting []
pg 0.26 is stuck inactive for 652.741734, current state creating, last 
acting []
pg 0.3e is stuck inactive for 652.741707, current state creating, last 
acting []
pg 0.f is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.3f is stuck inactive for 652.741708, current state creating, last 
acting []
pg 0.10 is stuck inactive for 652.741763, current state creating, last 
acting []
pg 0.4 is stuck inactive for 652.741773, current state creating, last 
acting []
pg 0.5 is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3a is stuck inactive for 652.741717, current state creating, last 
acting []
pg 0.b is stuck inactive for 652.741771, current state creating, last 
acting []
pg 0.c is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.3b is stuck inactive for 652.741721, current state creating, last 
acting []
pg 0.d is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.3c is stuck inactive for 652.741722, current state creating, last 
acting []
pg 0.e is stuck inactive for 652.741776, current state creating, last 
acting []
pg 0.3d is stuck inactive for 652.741724, current state creating, last 
acting []
pg 0.22 is stuck inactive for 652.741756, current state creating, last 
acting []
pg 0.21 is stuck inactive for 652.741758, current state creating, last 
acting []
pg 0.a is stuck inactive for 652.741783, current state creating, last 
acting []
pg 0.20 is stuck inactive for 652.741761, current state creating, last 
acting []
pg 0.9 is stuck inactive for 652.741787, current state creating, last 
acting []
pg 0.1f is stuck inactive for 652.741764, current state creating, last 
acting []
pg 0.8 is stuck inactive for 652.741790, current state creating, last 
acting []
pg 0.7 is stuck inactive for 652.741792, current state creating, last 
acting []
pg 0.6 is stuck inactive for 652.741794, current state creating, last 
acting []
pg 0.1e is stuck inactive for 652.741770, current state creating, last 
acting []
pg 0.1d is stuck inactive for 652.741772, current state creating, last 
acting []
pg 0.1c is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.1b is stuck inactive for 652.741777, current state creating, last 
acting []
pg 0.1a is stuck inactive for 652.741784, current state creating, last 
acting []
pg 0.2 is stuck inactive for 652.741812, current state creating, last 
acting []
pg 0.31 is stuck inactive for 652.741762, current state creating, last 
acting []
pg 0.19 is stuck inactive for 652.741789, current state creating, last 
acting []
pg 0.11 is stuck inactive for 652.741797, current state creating, last 
acting []
pg 0.18 is stuck inactive for 652.741793, current state creating, last 
acting []
pg 0.1 is stuck inactive for 652.741820, current state creating, last 
acting []
pg 0.30 is stuck inactive for 652.741769, current state creating, last 
acting []
pg 0.17 is stuck inactive for 652.741797, current state creating, last 
acting []
pg 0.0 is stuck inactive for 652.741829, current state creating, last 
acting []
pg 0.2f is stuck inactive for 652.741774, current state creating, last 
acting []
pg 0.16 is stuck inactive for 652.741802, current state creating, last 
acting []
pg 0.12 is stuck inactive for 652.741807, current state creating, last 
acting []
pg