[ceph-users] Ceph 0.56.4 - pgmap state: active+clean+scrubbing+deep

2013-04-22 Thread MinhTien MinhTien
Dear all, - I use CentOS 6.3 with kernel 3.8.6-1.el6.elrepo.x86_64 and Ceph storage (version 0.56.4). I set the data pool (which contains all data) with: ceph osd pool set data size 1 - and the metadata pool with: ceph osd pool set metadata size 2. I have several OSDs, each OSD = 14 TB (formatted ext4). I have 1 permanent error in

Re: [ceph-users] Ceph 0.56.4 - pgmap state: active+clean+scrubbing+deep

2013-04-22 Thread Mike Lowe
If it says 'active+clean' then it is OK, no matter what else it may additionally have as a status. Deep scrubbing is just a normal background process that makes sure your data is consistent, and it shouldn't keep you from accessing your data. Repair should only be done as a last resort; it will discard any
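As a quick way to see which placement groups are currently being scrubbed, you can filter the PG dump by state. The pipeline below runs against a hypothetical sample (the PG IDs are made up for illustration); on a live cluster you would pipe the output of `ceph pg dump` into the same awk filter:

```shell
# Hypothetical sample of `ceph pg dump` output (pg id, state columns);
# on a real cluster, replace the printf with: ceph pg dump
printf '%s\n' \
  '0.3f  active+clean' \
  '1.2a  active+clean+scrubbing+deep' \
  '2.11  active+clean+scrubbing' |
awk '$2 ~ /scrubbing/ { print $1, $2 }'
```

PGs whose state includes "scrubbing" (shallow or deep) are printed; everything that is only "active+clean" is filtered out.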

Re: [ceph-users] Ceph 0.56.4 - pgmap state: active+clean+scrubbing+deep

2013-04-22 Thread MinhTien MinhTien
Hi Mike Lowe, Thank you for your feedback. Deep scrubbing occurs frequently on this system, and while it runs users cannot access some data folders in the Ceph storage. How can I limit this situation? Thanks and Regards. TienBM On Mon, Apr 22, 2013 at 10:26 PM, Mike Lowe j.michael.l...@gmail.com wrote: If
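One way to reduce how often deep scrubs run, and to keep them off busy OSDs, is to tune the scrub settings in ceph.conf. This is only a sketch: the option names below come from the Ceph OSD configuration reference of that era, and the values are illustrative, not recommendations — verify both against the docs for your exact version.

```ini
[osd]
    ; run deep scrubs every 2 weeks (in seconds) instead of the default week
    osd deep scrub interval = 1209600
    ; do not start new scrubs while system load is above this threshold
    osd scrub load threshold = 0.5
    ; allow at most this many concurrent scrub operations per OSD
    osd max scrubs = 1
```

Restart the OSDs after editing ceph.conf, or inject the values into running daemons with `ceph osd tell <id> injectargs`, for the change to take effect.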

Re: [ceph-users] Ceph 0.56.4 - pgmap state: active+clean+scrubbing+deep

2013-04-22 Thread John Wilkins
This may be related to having your pool size = 1. See http://ceph.com/docs/master/rados/operations/troubleshooting-osd/#placement-groups-never-get-clean Try setting your data size to 2: ceph osd pool set data size 2 On Mon, Apr 22, 2013 at 7:07 AM, MinhTien MinhTien