Hi,

I've just added a few more OSDs to my cluster. As expected, the system
started rebalancing PGs onto the new nodes.
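
(For reference, the figures below are from watching the per-pool stats
and the overall cluster status; something along these lines should
reproduce them, though the exact invocations may vary a bit on 0.80.x:)

  # per-pool recovery/client IO, as in the .rgw.buckets block below
  ceph osd pool stats .rgw.buckets
  # overall PG/object summary, as in the status excerpt further down
  ceph -s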

pool .rgw.buckets id 24
  -221/-182 objects degraded (121.429%)
  recovery io 27213 kB/s, 53 objects/s
  client io 27434 B/s rd, 0 B/s wr, 66 op/s

The cluster status shows:
            988801/13249309 objects degraded (7.463%)
                  10 active+remapped+wait_backfill
                  13 active+remapped+backfilling
                 457 active+clean
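
(If it's useful, which PGs are backfilling or waiting can be pulled out
with something like the following; the dump columns may differ slightly
on 0.80.x:)

  # list all PGs and keep the ones in a backfill state
  ceph pg dump | grep backfill
  # or just the per-PG detail behind the health warning
  ceph health detail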

I'm running Ceph 0.80.5.

-- 

Luis Periquito

Unix Engineer

Ocado.com <http://www.ocado.com/>

Head Office, Titan Court, 3 Bishop Square, Hatfield Business Park,
Hatfield, Herts AL10 9NE
