Thanks a lot for the reply. To eliminate the issues of the root not being present
and of duplicate entries in the crush map, I have updated my crush map. Now I have
a default root and a crush hierarchy without duplicate entries.
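For reference, the kind of host-local setup I mean can be created with roughly the
following commands (the rule and pool names below are just placeholders; the host
bucket "ip-10-0-9-233" is from my map):

# ceph osd crush rule create-simple ip-10-0-9-233-local ip-10-0-9-233 osd
# ceph osd pool create localpool 64 64 replicated ip-10-0-9-233-local

The rule takes the host bucket as its root, so every replica of that pool lands
only on OSDs under that one host.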
I have now created one pool local to host "ip-10-0-9-233" while the other pool is
local to
(If you have some production data, do a backup first.)
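Roughly, the workflow I have in mind is the usual offline crush map edit (file
names below are just placeholders):

# ceph osd getcrushmap -o crushmap.backup
# crushtool -d crushmap.backup -o crushmap.txt
  ... edit crushmap.txt as needed ...
# crushtool -c crushmap.txt -o crushmap.new
# ceph osd setcrushmap -i crushmap.new

If anything goes wrong, "ceph osd setcrushmap -i crushmap.backup" puts the old
map back.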
Étienne
From: ceph-users on behalf of Mandar Naik
Sent: Wednesday, August 16, 2017 09:39
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph cluster in error state (full) with raw usage 32% of
Not going into the obvious point that the crush map just doesn't look correct or
even sane... or that the policy itself doesn't sound very sane - but I'm sure
you'll understand the caveats and issues it may present...
What's most probably happening is that a pool (or several) is using those same
OSDs.
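To confirm that, you could check which rule each pool uses and which pools' PGs
end up on the full OSD, for example (the OSD id here is just an example):

# ceph osd pool ls detail
# ceph osd crush rule dump
# ceph pg ls-by-osd 0

The first shows the crush rule assigned to each pool, the second shows which
bucket each rule takes, and the last lists the PGs (and therefore the pools)
placed on osd.0.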
Hi,
I just wanted to give a friendly reminder about this issue. I would appreciate it
if someone could help me out here. Also, please do let me know in case more
information is required.
On Thu, Aug 10, 2017 at 2:41 PM, Mandar Naik wrote:
Hi Peter,
Thanks a lot for the reply. Please find 'ceph osd df' output here -
# ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE VAR  PGS
 2 0.04399      1.0 46056M 35576k 46021M 0.08 0.00   0
 1 0.04399      1.0 46056M 40148k 46017M 0.09 0.00 384
 0 0.04399      1.0 46056M 43851M                  220
I think a `ceph osd df` would be useful.
And how did you set up such a cluster? I don't see a root, and you have
each osd in there more than once...is that even possible?
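Along with that, the output of:

# ceph osd tree

would help too - it shows the whole hierarchy (roots, hosts, and where each OSD
sits), so any duplicate OSD entries should be easy to spot there.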
On 08/10/17 08:46, Mandar Naik wrote:
Hi,
I am evaluating a ceph cluster for a solution where ceph could be used for
provisioning pools which could be either stored local to a node or replicated
across a cluster. This way ceph could be used as a single point of solution for
writing both local as well as replicated data. Local storage helps