Try using more OSDs.
I was hitting this same scenario when my OSD count was equal to k+m.
The errors went away when I used k+m+2, so in your case try with 8 or
10 OSDs.
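Note that with the LRC plugin the profile below actually needs
k + m + (k+m)/l = 2 + 2 + 2 = 6 chunks per object, and with
ruleset-failure-domain=host each chunk has to land on a different host.
A quick sanity check (using the profile name from the thread below):

    # How many OSDs, and more importantly how many hosts, are available?
    ceph osd tree

    # Confirm the chunk layout of the profile in question.
    ceph osd erasure-code-profile get lrctest1

If the number of hosts is smaller than the number of chunks, the PGs can
never be mapped and will sit in "creating" like yours do.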

On Thu, Feb 25, 2016 at 11:18 AM, Daleep Singh Bais <[email protected]>
wrote:

> Hi All,
>
> Any help in this regard will be appreciated.
>
> Thanks..
> Daleep Singh Bais
>
>
> -------- Forwarded Message --------
> Subject: Erasure code Plugins
> Date: Fri, 19 Feb 2016 12:13:36 +0530
> From: Daleep Singh Bais <[email protected]>
> To: ceph-users <[email protected]>
>
> Hi All,
>
> I am experimenting with erasure profiles and would like to understand more
> about them. I created an LRC profile based on
> http://docs.ceph.com/docs/master/rados/operations/erasure-code-lrc/
>
> The LRC profile I created is:
>
> ceph osd erasure-code-profile get lrctest1
> k=2
> l=2
> m=2
> plugin=lrc
> ruleset-failure-domain=host
> ruleset-locality=host
> ruleset-root=default
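>
> For reference, I created this profile with roughly the following command
> (reconstructed from the dump above, so treat it as a sketch):
>
>     ceph osd erasure-code-profile set lrctest1 \
>         plugin=lrc k=2 m=2 l=2 \
>         ruleset-failure-domain=host ruleset-locality=host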
>
> However, when I create a pool based on this profile, I see a health
> warning in ceph -w (128 pgs stuck inactive and 128 pgs stuck unclean).
> This is the first pool in the cluster.
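>
> For reference, the pool was created roughly like this (the pool name is
> a placeholder; 128 matches the PG count in the ceph -w output below):
>
>     ceph osd pool create lrcpool 128 128 erasure lrctest1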
>
> As I understand it, m is the number of parity chunks, and l creates
> additional local parity chunks for the k data chunks. Please correct me
> if I am wrong.
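>
> If I follow the documentation correctly, that would mean k+m = 4 chunks,
> plus (k+m)/l = 2 local parity chunks, i.e. 2+2+2 = 6 chunks per object
> in total. Is that right?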
>
> Below is the output of ceph -w:
>
>      health HEALTH_WARN
>             128 pgs stuck inactive
>             128 pgs stuck unclean
>      monmap e7: 1 mons at {node1=192.168.1.111:6789/0}
>             election epoch 101, quorum 0 node1
>      osdmap e928: 6 osds: 6 up, 6 in
>             flags sortbitwise
>       pgmap v54114: 128 pgs, 1 pools, 0 bytes data, 0 objects
>             10182 MB used, 5567 GB / 5589 GB avail
>                  128 creating
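>
> In case it helps, I can also gather more detail with commands like these
> (the PG id here is just an example):
>
>     ceph pg dump_stuck inactive
>     ceph pg 1.0 query
>     ceph osd crush rule dump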
>
>
> Any help or guidance in this regard is highly appreciated.
>
> Thanks,
>
> Daleep Singh Bais
>
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
