> Op 14 juni 2016 om 10:10 schreef Ansgar Jazdzewski 
> <[email protected]>:
> 
> 
> Hi,
> 
> we are using Ceph and radosgw to store images (~300 KB each) in S3;
> when it comes to deep-scrubbing, we are facing task timeouts (> 30s ...)
> 
> my question is:
> 
> with that number of objects/files, is it better to calculate the
> PGs on an object basis instead of on volume size? And how should that
> be done?
> 

Do you have bucket sharding enabled?

And how many objects do you have in a single bucket?

If sharding is not enabled for the bucket index, the entire index ends up 
in a few large RADOS objects, and those are hard to scrub.
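For reference, in Jewel-era releases index sharding for *new* buckets can be 
enabled via the rgw_override_bucket_index_max_shards option (existing buckets 
are not resharded by it). A minimal sketch of the ceph.conf change, assuming a 
gateway instance named client.rgw and a shard count of 8 chosen for 
illustration:

```
[client.rgw]
# Split the index of each newly created bucket across 8 RADOS
# objects instead of one. Only affects buckets created after
# this change; pick the shard count based on expected objects
# per bucket (a common rule of thumb is <100k objects per shard).
rgw_override_bucket_index_max_shards = 8
```

To see how many objects a bucket currently holds, `radosgw-admin bucket 
stats --bucket=<name>` reports the object count in its usage section.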

Wido

> thanks
> Ansgar
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com