On Thu, Apr 17, 2014 at 8:54 AM, Qing Zheng <[email protected]> wrote:
> -----Original Message-----
> From: Yan, Zheng [mailto:[email protected]]
> Sent: Wednesday, April 16, 2014 7:44 PM
> To: Qing Zheng
> Cc: [email protected]
> Subject: Re: [ceph-users] multi-mds and directory sharding
>
> On Thu, Apr 17, 2014 at 6:21 AM, Qing Zheng <[email protected]> wrote:
>> It seems that with kernel 3.14 and the latest source code from github, we
>> still run into trouble when testing multi-mds and directory sharding.
>>
>
> what's the problem you encountered?
>
> Hi Zheng,
>
> We are using mdtest to simulate workloads where multiple parallel client
> processes keep inserting empty files into a single newly created directory.
> We expect CephFS to balance its metadata servers, so that eventually every
> metadata server gets a share of the directory.
>
> We found that CephFS was only able to make progress for the first 5-10
> minutes under this workload, and then stalled -- the clients' "NewFile"
> calls would no longer return. From the client's point of view, it was as
> if the server had stopped processing requests.
>
> Our test deployment had 32 osds, 8 mds, and 1 mon.
> CephFS was kernel mounted. Clients were co-located with metadata servers.
>
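
The shared-directory workload described above corresponds to a typical
mdtest create phase. A hedged sketch of such an invocation (the process
count, host file, and mount path are placeholders, not the exact command
used in the test):

```shell
# Illustrative only: 64 MPI ranks, each creating 5k empty files
# in one shared directory under the CephFS mount.
#   -F       operate on files only (no subdirectories per item)
#   -C       run the create phase only
#   -n 5000  items created per process
#   -d PATH  target directory (placeholder path)
mpirun -np 64 --hostfile hosts.txt \
    mdtest -F -C -n 5000 -d /mnt/cephfs/mdtest-shared
```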

It seems that you mount cephfs on the same nodes that run the MDS or OSD
daemons. That can cause a deadlock
(http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/6648).
Please try using a separate node for the cephfs mount.
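
As a minimal sketch of that setup, assuming a dedicated client node (the
monitor address, secret file path, and mount point below are placeholders):
the script refuses to proceed if a Ceph server daemon is found running
locally, and otherwise would perform the kernel mount.

```shell
#!/bin/sh
# Guard against kernel-mounting CephFS on a node that also runs
# MDS or OSD daemons, since that co-location can deadlock.
check_no_local_ceph_daemons() {
    for d in ceph-mds ceph-osd; do
        if pgrep -x "$d" >/dev/null 2>&1; then
            echo "refusing to mount: $d is running on this node" >&2
            return 1
        fi
    done
    return 0
}

if check_no_local_ceph_daemons; then
    # Placeholder monitor address, credentials, and mount point:
    # mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    #     -o name=admin,secretfile=/etc/ceph/admin.secret
    echo "safe to mount"
fi
```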

Regards
Yan, Zheng
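
P.S. For reference, the split threshold asked about further down in the
thread is controlled by MDS options in ceph.conf. A hedged sketch, with
illustrative values (in Ceph of this vintage, fragmentation also has to be
switched on explicitly):

```ini
[mds]
; Enable directory fragmentation (off by default at this time).
mds bal frag = true
; Illustrative thresholds: split a directory fragment past 200
; entries, merge fragments back together below 50 entries.
mds bal split size = 200
mds bal merge size = 50
```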

> Cheers,
>
> -- Qing Zheng
>
>
>
>
>
>> Are there any limits either in the max number of active metadata
>> servers that we could possibly run or in the number of directory
>> entries that we could set for Ceph to trigger a directory split?
>>
>> Is it okay to run 128 or more active metadata servers, for example?
>> Is it okay to let Ceph split directories once a directory has
>> accumulated 200 entries?
>>
>> Cheers,
>>
>> -- Qing Zheng
>>
>> -----Original Message-----
>> From: Yan, Zheng [mailto:[email protected]]
>> Sent: Sunday, April 13, 2014 6:43 PM
>> To: Qing Zheng
>> Cc: [email protected]
>> Subject: Re: [ceph-users] multi-mds and directory sharding
>>
>> On Mon, Apr 14, 2014 at 2:54 AM, Qing Zheng <[email protected]> wrote:
>>> Hi -
>>>
>>> We are currently evaluating CephFS's metadata scalability and
>>> performance. One important feature of CephFS is its support for
>>> running multiple "active" mds instances and partitioning huge
>>> directories into small shards.
>>>
>>> We use mdtest to simulate workloads where multiple parallel client
>>> processes will keep inserting empty files into several large directories.
>>> We found that CephFS is only able to run for the first 5-10 minutes, and
>>> then stops making progress -- the clients' "creat" calls no longer return.
>>>
>>> We were using Ceph 0.72 and Ubuntu 12.10 with kernel 3.6.6.
>>> Our setup consisted of 8 osds, 3 mds, and 1 mon. All mds were active,
>>> rather than standby, and all were configured to split a directory once
>>> its size grew beyond 2k entries. We kernel-mounted (not fuse) CephFS on
>>> all 8 osd nodes.
>>
>> The 3.6 kernel is too old for cephfs. Please use a kernel compiled from
>> the testing branch (https://github.com/ceph/ceph-client) and the newest
>> development version of Ceph. There are a large number of fixes for
>> directory fragmentation and multi-mds.
>>
>> Regards
>> Yan, Zheng
>>
>>>
>>> To test CephFS, we launched 64 client processes on 8 osd nodes (8
>>> procs per osd). Each client would create 1 directory and then insert
>>> 5k empty files into that directory. In total 64 directories and 320k
>>> files would be created. CephFS gave an average throughput of 300~1k ops/s
>>> for the first 5 minutes, and then stopped making any progress.
>>>
>>> What might go wrong?
>>>
>>> If each client inserts 200 files instead of 5k, CephFS can finish the
>>> workload at 1.5K ops/s. If each client inserts 1k files, throughput drops
>>> to ~500 ops/s; with 2k files (the split threshold), ~400 ops/s.
>>>
>>> Are these numbers reasonable?
>>>
>>> -- Qing
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> [email protected]
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
