Hi!

Because the distribution is computed algorithmically by the CRUSH rules. So, as with
any other hash algorithm, the result depends on the 'data' itself. In Ceph, the
'data' is the object name.

Imagine that you have a simple plain hash table with 17 buckets, where the bucket
index is computed by a simple 'modulo 17' algorithm. When you insert the values
17, 34, 51, etc. into the table, they will all land in a single bucket, leaving
all the others empty. The same thing happens with Ceph, except that the 'hash
function' (the CRUSH map) is heavily parametrized by the OSD tree topology,
weights, etc.
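The bucket analogy above can be sketched in a few lines of plain Python (this is
just the toy hash table, not Ceph code):

```python
# Toy hash table with 17 buckets: bucket index = value % 17.
# Inserting multiples of 17 sends every value to bucket 0,
# leaving the other 16 buckets empty.
from collections import Counter

NUM_BUCKETS = 17
values = [17, 34, 51, 68, 85]

buckets = Counter(v % NUM_BUCKETS for v in values)
print(buckets)  # Counter({0: 5}) -- all five values collide in bucket 0
```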

You can also set the number of placement groups per pool - pg_num and pgp_num.
The higher their values, the more uniform the distribution will be. But higher
values also increase the resources needed by the monitors.
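A rough simulation of that effect (a sketch only: it hashes made-up object names
and uses simple modulo placement instead of the real CRUSH algorithm, and the
OSD and object counts are arbitrary):

```python
# More PGs -> a more even fill across OSDs.
# Objects are hashed into pg_num placement groups; each PG is then
# mapped to one of 10 "OSDs" by modulo (a stand-in for CRUSH's
# pseudo-random PG->OSD mapping).
import hashlib
from collections import Counter

def spread(pg_num, num_osds=10, num_objects=100_000):
    """Return max/min fill ratio across OSDs (1.0 = perfectly even)."""
    counts = Counter()
    for i in range(num_objects):
        h = int(hashlib.md5(f"obj-{i}".encode()).hexdigest(), 16)
        pg = h % pg_num
        counts[pg % num_osds] += 1
    return max(counts.values()) / min(counts.values())

print(spread(pg_num=16))    # few PGs: some OSDs get about twice the data
print(spread(pg_num=4096))  # many PGs: fill is close to uniform
```

With only 16 PGs over 10 OSDs, six OSDs hold two PGs and four hold one, so the
imbalance is roughly 2x regardless of how good the hash is; with 4096 PGs the
per-OSD PG counts nearly equalize.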


Megov Igor
CIO, Yuterra

________________________________________
From: ceph-users <[email protected]> on behalf of Квапил, Андрей
<[email protected]>
Sent: September 7, 2015 14:44
To: [email protected]
Subject: Re: [ceph-users] Ceph cache-pool overflow

Hello. Can somebody answer a simple question?

Why does Ceph, with equal OSD weights and sizes, not write to them equally?
Some get a bit more, others a bit less...

Thanks.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
