[sequential read]
readwrite=read
size=2g
directory=/mnt/mycephfs
ioengine=libaio
direct=1
blocksize=${BLOCKSIZE}
numjobs=1
iodepth=1
invalidate=1 # invalidate the buffer/page cache for this file before starting I/O
#nrfiles=1
[sequential write]
readwrite=write # randread randwrite
size=2g
directory=/mnt/mycephfs
ioengine=libaio
direct=1
blocksize=${BLOCKSIZE}
numjobs=1
iodepth=1
invalidate=1
[random read]
readwrite=randread
size=2g
directory=/mnt/mycephfs
ioengine=libaio
direct=1
blocksize=${BLOCKSIZE}
numjobs=1
iodepth=1
invalidate=1
[random write]
readwrite=randwrite
size=2g
directory=/mnt/mycephfs
ioengine=libaio
direct=1
blocksize=${BLOCKSIZE}
numjobs=1
iodepth=1
invalidate=1
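
Roughly, this is how the job file is driven (a minimal sketch; the job-file
name and the block-size list here are illustrative, not my exact values):

#!/usr/bin/env bash
# Minimal driver sketch for the job file above. The job-file name
# (cephfs-bench.fio) and the block-size list are illustrative placeholders.
set -euo pipefail

JOBFILE=cephfs-bench.fio

# The object size itself is changed separately (e.g. via the CephFS layout
# xattrs such as ceph.dir.layout.object_size); that step is not shown here.

for bs in 64k 512k 4m 64m; do
    echo "=== blocksize=${bs} ==="
    # fio substitutes ${BLOCKSIZE} in the job file from the environment.
    BLOCKSIZE="${bs}" fio "${JOBFILE}"
done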

On Sun, Aug 9, 2015 at 9:27 PM, Yan, Zheng <[email protected]> wrote:

>
> On Sun, Aug 9, 2015 at 8:57 AM, Hadi Montakhabi <[email protected]> wrote:
>
>> I am using fio.
>> I use the kernel module to mount CephFS.
>>
>
> Please send the fio job file to us.
>
>
>
>> On Aug 8, 2015 10:52 AM, "Ketor D" <[email protected]> wrote:
>>
>>> Hi Hadi,
>>>       Which bench tool do you use? And how do you mount CephFS, ceph-fuse
>>> or kernel-cephfs?
>>>
>>> On Fri, Aug 7, 2015 at 11:50 PM, Hadi Montakhabi <[email protected]> wrote:
>>>
>>>> Hello Cephers,
>>>>
>>>> I am benchmarking CephFS. In one of my experiments, I vary the object
>>>> size, starting at 64KB. For each object size I run reads and writes with
>>>> different block sizes.
>>>> When I increase the object size to 64MB and the block size to 64MB,
>>>> CephFS crashes (shown in the chart below). By "crash" I mean that
>>>> "ceph -s" or "ceph -w" keeps reporting reads, but the operation never
>>>> finishes (even after a few days!).
>>>> I have repeated this experiment with different underlying file systems
>>>> (xfs and btrfs), and the same thing happens in both cases.
>>>> What could be causing CephFS to crash? Is there a limit on object size
>>>> in CephFS?
>>>>
>>>> Thank you,
>>>> Hadi
>>>>