Logan,

Thank you for the feedback.


Rhian Resnick

Assistant Director Middleware and HPC

Office of Information Technology


Florida Atlantic University

777 Glades Road, CM22, Rm 173B

Boca Raton, FL 33431

Phone 561.297.2647

Fax 561.297.0222



________________________________
From: Logan Kuhn <[email protected]>
Sent: Tuesday, February 21, 2017 8:42 AM
To: Rhian Resnick
Cc: [email protected]
Subject: Re: [ceph-users] Cephfs with large numbers of files per directory

We had a very similar configuration at one point.

I was fairly new when we started to move away from it, but what happened to us 
was that any operation that walked a directory (stat, backup, ls, rsync, etc.) 
would take minutes to return, and while it was waiting, CPU load would spike 
due to iowait.  The difference between your setup and ours is that we used a 
gateway machine; the actual cluster never had any issues with it.  This was 
also on Infernalis, so things have probably changed in Jewel and Kraken.

Regards,
Logan

----- On Feb 21, 2017, at 7:37 AM, Rhian Resnick <[email protected]> wrote:

Good morning,


We are currently investigating Ceph for a KVM farm, block storage, and 
possibly file systems (cephfs with ceph-fuse, and ceph hadoop). Our cluster 
will be composed of 4 nodes, ~240 OSDs, and 4 monitors providing mon and mds 
services as required.


What experience has the community had with large numbers of files in a single 
directory (500,000 to 5 million)? We know that directory fragmentation will be 
required, but we are concerned about the stability of the implementation.
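
For reference, our current understanding of the relevant knobs is sketched 
below. This is only our reading of the Jewel-era documentation: the filesystem 
name "cephfs" is a placeholder and the values shown are the documented 
defaults, not anything we have tested, so please correct us if any of it is 
wrong.

# Directory fragmentation is disabled by default in Jewel/Kraken and has to
# be switched on per filesystem ("cephfs" here is just an example fs name):
ceph fs set cephfs allow_dirfrags true

# ceph.conf, [mds] section: thresholds that control when the MDS splits or
# merges a directory fragment (values are the documented defaults):
[mds]
# split a fragment once it holds this many entries
mds bal split size = 10000
# merge fragments back together below this many entries
mds bal merge size = 50
# each split produces 2^split_bits new fragments (3 -> 8 fragments)
mds bal split bits = 3
# hard cap on entries in a single fragment
mds bal fragment size max = 100000

If we are reading the docs correctly, that last option is a hard per-fragment 
cap, which is why fragmentation looks unavoidable at the 500,000+ file counts 
above.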


Your opinions and suggestions are welcome.


Thank you


Rhian Resnick

Assistant Director Middleware and HPC

Office of Information Technology


Florida Atlantic University

777 Glades Road, CM22, Rm 173B

Boca Raton, FL 33431

Phone 561.297.2647

Fax 561.297.0222


_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com