Can those hints be disabled somehow? I was battling XFS preallocation the other 
day, and the mount option didn't make any difference - maybe because those 
hints take precedence over it (which could mean they aren't working as they 
should), maybe not.
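
One way to check (a rough sketch, assuming the hints end up as a per-inode XFS 
extent size hint on the OSD's object files - the path is just a placeholder) is 
to read the hint with the fsxattr ioctl; the same ioctl can also zero it out:

/* Sketch: read (and optionally clear) the XFS extent size hint on a file.
 * Assumption: the io hints end up as a per-inode extent size hint on the
 * OSD's object files; the path you pass in is a placeholder.
 * Needs the xfsprogs headers (xfslibs-dev / xfsprogs-devel).
 * Build: cc -o extsize extsize.c
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>   /* struct fsxattr, XFS_IOC_FSGETXATTR/FSSETXATTR */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <object-file> [clear]\n", argv[0]);
        return 1;
    }
    int clear = argc > 2;
    int fd = open(argv[1], clear ? O_RDWR : O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct fsxattr fsx;
    if (ioctl(fd, XFS_IOC_FSGETXATTR, &fsx) < 0) { perror("FSGETXATTR"); return 1; }
    printf("extent size hint: %u bytes (xflags 0x%x)\n", fsx.fsx_extsize, fsx.fsx_xflags);

    if (clear) {   /* drop the hint on this inode */
        fsx.fsx_extsize = 0;
        fsx.fsx_xflags &= ~XFS_XFLAG_EXTSIZE;
        if (ioctl(fd, XFS_IOC_FSSETXATTR, &fsx) < 0) { perror("FSSETXATTR"); return 1; }
    }
    close(fd);
    return 0;
}

If fsx_extsize comes back non-zero on those files, the hint is set per inode, 
which could explain why a mount-wide option has no visible effect.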

In particular, when you fallocate a file, the blocks are reserved but marked as 
unwritten rather than actually initialized. When you then dirty a block with a 
write and a flush, the unwritten-to-written conversion has to be written to the 
journal synchronously. This is slow on all drives and extremely slow on sh*tty 
ones: a benchmark on such a file yields just 100 write IOPS, while the same 
file preallocated with dd if=/dev/zero does 6000 IOPS! And there doesn't seem 
to be a way to disable this in XFS. I'm not sure whether the hints should help 
here or whether they are actually causing the problem (I am not clear on 
whether they preallocate the metadata blocks too, or just reserve the block 
count). Ext4 does the same thing.
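
Here is a rough sketch of the kind of test I mean (file names and sizes are 
arbitrary): preallocate one file with fallocate() and another by writing zeros, 
then time synchronous 4k overwrites on each. On the fallocate'd file, the first 
write to each extent has to convert it from unwritten to written, and that 
conversion is what hits the journal:

/* Sketch: compare synchronous overwrite latency on a file preallocated with
 * fallocate() (unwritten extents) vs. one preallocated by writing zeros
 * (like dd if=/dev/zero). File names and sizes are arbitrary.
 * Build: cc -O2 -o prealloc_test prealloc_test.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define FILE_SIZE   (64UL << 20)   /* 64 MB test file */
#define BLOCK_SIZE  4096UL
#define NUM_WRITES  1024

static double timed_sync_writes(const char *path)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0) { perror("open"); exit(1); }

    char buf[BLOCK_SIZE];
    memset(buf, 0xab, sizeof(buf));

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NUM_WRITES; i++) {
        off_t off = (off_t)(rand() % (FILE_SIZE / BLOCK_SIZE)) * BLOCK_SIZE;
        if (pwrite(fd, buf, BLOCK_SIZE, off) != (ssize_t)BLOCK_SIZE) { perror("pwrite"); exit(1); }
        /* each flush hits the journal when an unwritten extent gets converted */
        if (fdatasync(fd) < 0) { perror("fdatasync"); exit(1); }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    /* File A: reserved with fallocate() -> extents exist but are marked unwritten. */
    int fd = open("falloc.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0 || fallocate(fd, 0, 0, FILE_SIZE) < 0) { perror("fallocate"); return 1; }
    close(fd);

    /* File B: filled with zeros -> extents are already initialized. */
    fd = open("zerofill.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    char zeros[BLOCK_SIZE] = {0};
    for (unsigned long i = 0; i < FILE_SIZE / BLOCK_SIZE; i++)
        if (write(fd, zeros, BLOCK_SIZE) != (ssize_t)BLOCK_SIZE) { perror("write"); return 1; }
    fsync(fd);
    close(fd);

    printf("fallocate'd file: %d sync writes in %.2fs\n", NUM_WRITES, timed_sync_writes("falloc.dat"));
    printf("zero-filled file: %d sync writes in %.2fs\n", NUM_WRITES, timed_sync_writes("zerofill.dat"));
    return 0;
}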

Might be worth looking into?

Jan


> On 31 Oct 2015, at 19:36, Gregory Farnum <gfar...@redhat.com> wrote:
> 
> On Friday, October 30, 2015, mad Engineer <themadengin...@gmail.com> wrote:
> I am learning Ceph block storage and read that each object is 4 MB in size. I 
> am still not clear on the concepts of object storage: what happens if the 
> actual amount of data written to the block is less than 4 MB, say 1 MB? Will 
> it still create a 4 MB object and keep the rest of the space free and 
> unusable?
> 
> No, it will only take up as much space as you write (plus some metadata). 
> Although I think RBD passes down io hints suggesting that the object's final 
> size will be 4 MB, so that the underlying storage (e.g. XFS) can prevent 
> fragmentation.
> -Greg
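
(If I understand correctly, the hint Greg describes is the alloc-hint operation 
that librados exposes; below is a minimal sketch of a client setting it 
explicitly - the pool and object names are placeholders, and RBD issues the 
equivalent hint internally.)

/* Sketch: setting the allocation hint directly via librados.
 * Pool name "rbd" and the object name are placeholders.
 * Build: cc -o allochint allochint.c -lrados
 */
#include <stdio.h>
#include <string.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;

    if (rados_create(&cluster, NULL) < 0) return 1;
    rados_conf_read_file(cluster, NULL);          /* default /etc/ceph/ceph.conf */
    if (rados_connect(cluster) < 0) return 1;
    if (rados_ioctx_create(cluster, "rbd", &io) < 0) return 1;

    /* Hint that this object will eventually be 4 MB, written in 4 MB chunks,
     * then write 1 MB of data into it. */
    rados_write_op_t op = rados_create_write_op();
    rados_write_op_set_alloc_hint(op, 4 * 1024 * 1024, 4 * 1024 * 1024);

    char buf[1024 * 1024];
    memset(buf, 0xcd, sizeof(buf));
    rados_write_op_write(op, buf, sizeof(buf), 0);

    int r = rados_write_op_operate(op, io, "test_object", NULL, 0);
    printf("write_op_operate: %d\n", r);

    rados_release_write_op(op);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}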