This may be related to having your pool size = 1. See
http://ceph.com/docs/master/rados/operations/troubleshooting-osd/#placement-groups-never-get-clean

Try setting the data pool's replication size to 2: "ceph osd pool set data size 2"
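For reference, a minimal session might look like this (assuming the default "data" pool name and a node with admin keyring access; run against your own cluster):

```shell
# Raise the data pool's replica count from 1 to 2
ceph osd pool set data size 2

# Confirm the new size was applied
ceph osd pool get data size

# Watch the cluster re-peer and backfill the new replicas
ceph -w
```

Expect the cluster to show degraded/backfilling PGs for a while after the change, until the second copies are written.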


On Mon, Apr 22, 2013 at 7:07 AM, MinhTien MinhTien <
[email protected]> wrote:

> Dear all,
>
> - I am running CentOS 6.3 with kernel 3.8.6-1.el6.elrepo.x86_64 and Ceph
> storage (version 0.56.4). I set the data pool (which contains all data) with:
>
> ceph osd pool set data size 1
>
> - and the metadata pool with:
>
> ceph osd pool set metadata size 2
>
> I have OSDs, each OSD = 14 TB (formatted ext4).
>
> There is one persistent error in the system:
>
> 2013-04-22 20:24:20.942457 mon.0 [INF] pgmap v313221: 640 pgs: 638
> active+clean, 2 *active+clean+scrubbing+deep*; 17915 GB data, 17947 GB
> used, 86469 GB / 107 TB avail
> 2013-04-22 20:24:12.256632 osd.1 [INF] 1.2e scrub ok
> 2013-04-22 20:24:23.348560 mon.0 [INF] pgmap v313222: 640 pgs: 638
> active+clean, 2 *active+clean+scrubbing+deep*; 17915 GB data, 17947 GB
> used, 86469 GB / 107 TB avail
> 2013-04-22 20:24:21.551528 osd.1 [INF] 1.3f scrub ok
> 2013-04-22 20:24:52.009562 mon.0 [INF] pgmap v313223: 640 pgs: 638
> active+clean, 2 *active+clean+scrubbing+deep*; 17915 GB data, 17947 GB
> used, 86469 GB / 107 TB avail
>
> This prevents me from accessing some data.
>
> I tried restarting and running "ceph pg repair", but the error still exists.
>
> I need some advice.
>
> Thanks
>
>
>
> --
> TienBM
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
John Wilkins
Senior Technical Writer
Inktank
[email protected]
(415) 425-9599
http://inktank.com
