Hi cephers, Sage and Haomai,

Recently we got stuck on a performance-drop problem during recovery. The 
scenario is simple:
1. run fio with random writes (bs=4k)
2. stop one OSD; sleep 10; start the OSD
3. IOPS drops from 6K to about 200
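For reference, the fio job we use is along these lines (the pool and image 
names are hypothetical; substitute your own, or point a libaio job at a 
mapped /dev/rbd* device instead):

```ini
[global]
# rbd ioengine talks to the cluster directly via librbd
ioengine=rbd
clientname=admin
pool=rbd
# hypothetical test image
rbdname=testimg
rw=randwrite
bs=4k
iodepth=32
direct=1
runtime=300
time_based

[rand-write-4k]
```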

We now know the SSD that OSD sits on is the bottleneck during recovery. After 
reading the code, we found the IO on that 
SSD comes from two sources:
1. normal recovery IO
2. user IO that hits the missing list, which needs to recover the 4M object first.

So our first step was to limit the recovery IO to reduce the stress on that SSD. 
That helps in some scenarios, but not this one.
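For reference, the knobs we used to throttle recovery look roughly like this 
in ceph.conf (the values shown are illustrative, not a recommendation):

```ini
[osd]
# limit concurrent backfill/recovery work per OSD
osd_max_backfills = 1
osd_recovery_max_active = 1
# deprioritize recovery ops relative to client ops
osd_recovery_op_priority = 1
osd_client_op_priority = 63
```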


We have 36 OSDs with 3 replicas, so when one OSD goes down, about 1/12 of the 
objects will be in a degraded state.
When we run fio with 4k randwrite, about 1/12 of the IOs will stall and need to 
recover the 4M object first.
That greatly amplifies the stress on that SSD.
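The amplification can be sketched with some back-of-the-envelope arithmetic 
(a rough model only; the exact degraded fraction depends on CRUSH placement):

```python
# Rough model of write amplification during recovery, using the
# numbers above: 36 OSDs, 3 replicas, one OSD down, 4k user writes,
# 4M objects.

replicas = 3
osds = 36
object_size = 4 * 1024 * 1024   # 4M objects
io_size = 4 * 1024              # 4k user writes

# With one OSD down, roughly replicas/osds of all objects lose a copy.
degraded_fraction = replicas / osds          # 3/36 = 1/12

# Each 4k write that hits a degraded object forces recovery of the
# whole 4M object before the write can complete.
extra_bytes_per_user_io = degraded_fraction * object_size

# Amplification relative to the 4k the user actually wrote.
amplification = 1 + extra_bytes_per_user_io / io_size

print(degraded_fraction)   # ~1/12
print(amplification)       # ~86x
```

With 32k objects the same model gives roughly 1 + (1/12) * 32768/4096 ≈ 1.7x, 
which is the motivation for shrinking the object size.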

To reduce that amplification, we want to change the default object size 
from 4M to 32k.

We know that will increase the number of objects per OSD and make the removal 
process take longer.
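The object-count blowup is easy to quantify (assuming a hypothetical 1TB of 
data per OSD, just for scale):

```python
# How many objects a fixed amount of data becomes at each object size.
data_per_osd = 1 * 1024**4                    # assume 1TB per OSD (illustrative)

objects_4m = data_per_osd // (4 * 1024**2)    # object size 4M
objects_32k = data_per_osd // (32 * 1024)     # object size 32k

print(objects_4m)                   # 262,144 objects
print(objects_32k)                  # 33,554,432 objects
print(objects_32k // objects_4m)    # 128x more objects per OSD
```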

Here I want to ask you guys: are there any other potential problems with a 
32k object size? If there is no obvious problem, we could dive into
it and do more testing.

Many thanks!
                                
--------------
hzwulibin
2015-12-23
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
