Thanks Janne!!
I deleted all the pools. A few default rgw pools got auto-created, and the
rest I created manually. Now Ceph looks happy.
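For anyone hitting the same issue, the delete/re-create cycle looks roughly
like this (a sketch; the pool name "mypool" and the PG counts are
placeholders, and pool deletion requires mon_allow_pool_delete to be
enabled first):
---
# ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=true'
# ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
# ceph osd pool create mypool 64 64 replicated
# ceph osd pool application enable mypool rgw
---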
On Thu, Oct 31, 2019 at 11:18 PM Janne Johansson wrote:
>
>
> On Thu, 31 Oct 2019 at 04:22, soumya tr wrote:
>
>> Thanks 潘东元 for the response.
>>
>> The creation of a new pool works, and all the PGs corresponding to that
>> pool have active+clean state.
Thanks 潘东元 for the response.
The creation of a new pool works, and all the PGs corresponding to that
pool have active+clean state.
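For reference, the check was along these lines (a sketch; "testpool" is a
placeholder name):
---
# ceph osd pool create testpool 8
# ceph pg ls-by-pool testpool    # every PG should show active+clean
---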
When I initially set up the 3-node ceph cluster using juju charms (the
replication count per object was set to 3), there were issues with the
ceph-osd services.
So I had to delete
Thanks, Wido, for the update.
Yeah, I have already tried restarting ceph-mgr, but it didn't help.
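For the record, the restart was something like this (a sketch; the mgr
instance name "node1" is a placeholder, the actual id comes from the juju
unit):
---
# systemctl restart ceph-mgr@node1
# ceph mgr dump | grep active_name    # confirm an active mgr came back
---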
On Wed, Oct 30, 2019 at 4:30 PM Wido den Hollander wrote:
>
>
> On 10/30/19 3:04 AM, soumya tr wrote:
> > Hi all,
> >
> > I have a 3 node ceph cluster setup using juju charms. ceph health shows
Your PG acting set is empty; in the cluster report that indicates the PGs
do not have a primary OSD.
What was your cluster status when you created the pool?
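A quick way to check the acting set from the monitors (a sketch; the pg id
1.0 is a placeholder):
---
# ceph pg dump_stuck inactive    # list the PGs stuck inactive
# ceph pg map 1.0                # prints the up and acting sets for one PG
# ceph osd tree                  # confirm the OSDs are up and in
---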
On Wed, Oct 30, 2019 at 1:30 PM Wido den Hollander wrote:
>
>
>
> On 10/30/19 3:04 AM, soumya tr wrote:
> > Hi all,
> >
> > I have a 3 node ceph
Hi all,
I have a 3-node ceph cluster set up using juju charms. ceph health shows
inactive pgs.
---
# ceph status
  cluster:
    id:     0e36956e-ef64-11e9-b472-00163e6e01e8
    health: HEALTH_WARN
            Reduced data availability: 114 pgs inactive

  services:
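To see which PGs are inactive and how the pools are configured (a sketch):
---
# ceph health detail        # lists the affected PG ids
# ceph osd pool ls detail   # shows each pool's size, min_size and crush rule
---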