Hi,

On 14 Apr 2014 at 00:43:01, Yan, Zheng ([email protected]) wrote:
> On Mon, Apr 14, 2014 at 2:54 AM, Qing Zheng wrote:  
> > Hi -  
> >  
> > We are currently evaluating CephFS's metadata scalability  
> > and performance. One important feature of CephFS is its support  
> > for running multiple "active" MDS instances and partitioning huge  
> > directories into small shards.  
> >  
> > We use mdtest to simulate workloads where multiple parallel client  
> > processes will keep inserting empty files into several large directories.  
> > We found that CephFS is only able to run for the first 5-10 minutes  
> > and then stops making progress -- the clients' "creat" calls no longer  
> > return.  
> >  
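For reference, a create-only mdtest run of that shape would look roughly
like the following; the process count, per-process file count, and mount
path here are illustrative, not the exact parameters from the test above:

    # files only (-F), creation phase only (-C),
    # 100k empty files per MPI process
    mpirun -np 32 mdtest -F -C -n 100000 -d /mnt/cephfs/mdtest
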
> > We were using Ceph 0.72 and Ubuntu 12.10 with kernel 3.6.6.  
> > Our setup consisted of 8 OSDs, 3 MDSes, and 1 mon. All MDSes were  
> > active rather than standby, and they were all configured to split a  
> > directory once its size exceeded 2k entries. We kernel-mounted (not  
> > FUSE) CephFS on all 8 OSD nodes.  
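
For concreteness, that setup would be expressed along these lines. The
split threshold matches the 2k figure above, but the monitor address, the
secret-file path, and the need to enable "mds bal frag" explicitly are my
assumptions, not details from the original mail:

    # ceph.conf on the MDS nodes: enable directory fragmentation and
    # split a fragment once it holds more than 2000 entries
    [mds]
        mds bal frag = true
        mds bal split size = 2000

    # raise the number of active MDS ranks to 3 (pre-Jewel syntax)
    ceph mds set_max_mds 3

    # kernel mount on each client node
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret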
>  
> The 3.6 kernel is too old for CephFS. Please use a kernel compiled from  
> the testing branch of https://github.com/ceph/ceph-client and the newest  
> development version of Ceph. There are a large number of fixes for  
> directory fragmentation and multi-MDS. 
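
In case it helps others following the thread, building that kernel is
roughly the sequence below; reusing the running kernel's config is just
one convenient way to get a working .config, not something Zheng
specified:

    git clone https://github.com/ceph/ceph-client.git
    cd ceph-client
    git checkout testing
    # start from the running kernel's configuration
    cp /boot/config-$(uname -r) .config
    make olddefconfig
    make -j$(nproc)
    sudo make modules_install install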

Does the dev version of the MDS rely on any dev features in RADOS? I.e., can 
we use a dumpling or emperor cluster with a dev MDS?

And what is the status of FUSE CephFS in the new dev version? Is it up to 
date with the latest kernel client?

Cheers, Dan

-- Dan van der Ster || Data & Storage Services || CERN IT Department --

> 
> Regards  
> Yan, Zheng  
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
