> In order to reduce the impact, we want to change the default size of
> the object from 4M to 32k.
>
> We know that this will increase the number of objects on one OSD and make
> the removal process take longer.
>
> Here I want to ask you guys: are there any other potential problems that a
> 32k size could have? If there is no obvious problem, we could dive into it
> and do more testing.
I assume the objects on the OSD's filesystem will become 32k when you do this. So if you have 1TB of data on one OSD, you will have roughly 31 million files, which means 31 million inodes. This excludes the directory structure, which might also be significant. If you have 10 OSDs on a server, you will easily hit 310 million inodes.

You will need a LOT of memory to make sure the inodes are cached, but even then, looking up an inode might add significant latency. My guess is that it will be fast in the beginning but will grind to a halt as the cluster gets fuller, due to inodes no longer being in memory.

This also does not take into account any other bottlenecks you might hit in Ceph, which other users can probably answer better.

Cheers,
Robert van Leeuwen
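For reference, the file-count arithmetic above can be checked with a quick sketch (assuming decimal terabytes; the per-inode cache cost at the end is a rough, hypothetical figure, as the real number varies by filesystem and kernel version):

```python
# Rough estimate of file/inode counts when shrinking the default
# object size from 4M to 32k (figures from the discussion above).
data_per_osd = 10**12          # 1 TB of data per OSD (decimal TB)
object_size = 32 * 1024        # proposed 32k object size

files_per_osd = data_per_osd // object_size
print(files_per_osd)           # ~30.5 million files (inodes) per OSD

osds_per_server = 10
total_inodes = files_per_osd * osds_per_server
print(total_inodes)            # ~305 million inodes per server

# Hypothetical: if each cached inode costs on the order of 1 KB of RAM,
# caching all of them would need roughly this many gigabytes:
approx_cache_gb = total_inodes * 1024 / 10**9
print(round(approx_cache_gb))  # ~312 GB just for the inode cache
```

This ignores directory inodes entirely, so the real numbers would be somewhat higher.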