Re: CephFS Space Accounting and Quotas

2013-03-18 Thread Jim Schutt
On 03/15/2013 05:17 PM, Greg Farnum wrote: [Putting list back on cc] On Friday, March 15, 2013 at 4:11 PM, Jim Schutt wrote: On 03/15/2013 04:23 PM, Greg Farnum wrote: As I come back and look at these again, I'm not sure what the context for these logs is. Which test did they come from,

Re: CephFS Space Accounting and Quotas

2013-03-15 Thread Greg Farnum
[Putting list back on cc] On Friday, March 15, 2013 at 4:11 PM, Jim Schutt wrote: On 03/15/2013 04:23 PM, Greg Farnum wrote: As I come back and look at these again, I'm not sure what the context for these logs is. Which test did they come from, and which behavior (slow or not slow, etc)

Re: CephFS Space Accounting and Quotas

2013-03-12 Thread Jim Schutt
On 03/11/2013 02:40 PM, Jim Schutt wrote: If you want I can attempt to duplicate my memory of the first test I reported, writing the files today and doing the strace tomorrow (with timestamps, this time). Also, would it be helpful to write the files with minimal logging, in hopes of
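
For readers reproducing this, the timed trace Jim mentions can be captured with standard strace options (the target path below is made up for illustration): -tt prints wall-clock timestamps with microsecond resolution and -T prints the time spent inside each syscall, which together make any slow stat calls easy to spot.

    # trace the stat-heavy command, following children, writing to a file
    strace -f -tt -T -o /tmp/stat.trace ls -l /mnt/ceph/job42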

Re: CephFS Space Accounting and Quotas

2013-03-11 Thread Jim Schutt
On 03/08/2013 07:05 PM, Greg Farnum wrote: On Friday, March 8, 2013 at 2:45 PM, Jim Schutt wrote: On 03/07/2013 08:15 AM, Jim Schutt wrote: On 03/06/2013 05:18 PM, Greg Farnum wrote: On Wednesday, March 6, 2013 at 3:14 PM, Jim Schutt wrote: [snip] Do you want the MDS log at 10

Re: CephFS Space Accounting and Quotas

2013-03-11 Thread Greg Farnum
On Monday, March 11, 2013 at 7:47 AM, Jim Schutt wrote: On 03/08/2013 07:05 PM, Greg Farnum wrote: On Friday, March 8, 2013 at 2:45 PM, Jim Schutt wrote: On 03/07/2013 08:15 AM, Jim Schutt wrote: On 03/06/2013 05:18 PM, Greg Farnum wrote: On Wednesday, March 6, 2013 at 3:14 PM, Jim

Re: CephFS Space Accounting and Quotas

2013-03-11 Thread Jim Schutt
On 03/11/2013 09:48 AM, Greg Farnum wrote: On Monday, March 11, 2013 at 7:47 AM, Jim Schutt wrote: On 03/08/2013 07:05 PM, Greg Farnum wrote: On Friday, March 8, 2013 at 2:45 PM, Jim Schutt wrote: On 03/07/2013 08:15 AM, Jim Schutt wrote: On 03/06/2013 05:18 PM, Greg Farnum wrote: On

Re: CephFS Space Accounting and Quotas

2013-03-11 Thread Greg Farnum
On Monday, March 11, 2013 at 9:48 AM, Jim Schutt wrote: On 03/11/2013 09:48 AM, Greg Farnum wrote: On Monday, March 11, 2013 at 7:47 AM, Jim Schutt wrote: For this run, the MDS logging slowed it down enough to cause the client caps to occasionally go stale. I don't think it's the

Re: CephFS Space Accounting and Quotas

2013-03-11 Thread Jim Schutt
On 03/11/2013 10:57 AM, Greg Farnum wrote: On Monday, March 11, 2013 at 9:48 AM, Jim Schutt wrote: On 03/11/2013 09:48 AM, Greg Farnum wrote: On Monday, March 11, 2013 at 7:47 AM, Jim Schutt wrote: For this run, the MDS logging slowed it down enough to cause the client caps to occasionally

Re: CephFS Space Accounting and Quotas

2013-03-08 Thread Jim Schutt
On 03/07/2013 08:15 AM, Jim Schutt wrote: On 03/06/2013 05:18 PM, Greg Farnum wrote: On Wednesday, March 6, 2013 at 3:14 PM, Jim Schutt wrote: [snip] Do you want the MDS log at 10 or 20? More is better. ;) OK, thanks. I've sent some mds logs via private email... -- Jim

Re: CephFS Space Accounting and Quotas

2013-03-08 Thread Greg Farnum
On Friday, March 8, 2013 at 2:45 PM, Jim Schutt wrote: On 03/07/2013 08:15 AM, Jim Schutt wrote: On 03/06/2013 05:18 PM, Greg Farnum wrote: On Wednesday, March 6, 2013 at 3:14 PM, Jim Schutt wrote: [snip] Do you want the MDS log at 10 or 20? More is better. ;)

Re: CephFS Space Accounting and Quotas

2013-03-07 Thread Jim Schutt
On 03/06/2013 05:18 PM, Greg Farnum wrote: On Wednesday, March 6, 2013 at 3:14 PM, Jim Schutt wrote: When I'm doing these stat operations the file system is otherwise idle. What's the cluster look like? This is just one active MDS and a couple hundred clients? 1 mds, 1 mon, 576 osds, 198

CephFS Space Accounting and Quotas (was: CephFS First product release discussion)

2013-03-06 Thread Greg Farnum
On Wednesday, March 6, 2013 at 11:07 AM, Jim Schutt wrote: On 03/05/2013 12:33 PM, Sage Weil wrote: Running 'du' on each directory would be much faster with Ceph since it tracks the subdirectories and shows their total size with an 'ls -al'. Environments with
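
To make the comparison Sage is drawing concrete, here is a minimal sketch (the mount point and directory layout are assumptions, not taken from the thread): because CephFS rolls recursive byte counts up into each directory's apparent size, a single stat per top-level directory replaces a full du walk of every file underneath it.

    # print the recursive space used by each user's tree, one stat() each
    for dir in /mnt/ceph/users/*; do
        printf '%14s  %s\n' "$(stat -c %s "$dir")" "$dir"
    done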

Re: CephFS Space Accounting and Quotas

2013-03-06 Thread Jim Schutt
On 03/06/2013 12:13 PM, Greg Farnum wrote: On Wednesday, March 6, 2013 at 11:07 AM, Jim Schutt wrote: On 03/05/2013 12:33 PM, Sage Weil wrote: Running 'du' on each directory would be much faster with Ceph since it tracks the subdirectories and shows their total size with an 'ls -al'.

Re: CephFS Space Accounting and Quotas

2013-03-06 Thread Greg Farnum
On Wednesday, March 6, 2013 at 11:58 AM, Jim Schutt wrote: On 03/06/2013 12:13 PM, Greg Farnum wrote: Check out the directory sizes with ls -l or whatever — those numbers are semantically meaningful! :) That is just exceptionally cool! Unfortunately we can't (currently) use
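
The numbers Greg is pointing at are also exposed as virtual extended attributes on directories (ceph.dir.rbytes, ceph.dir.rfiles, ceph.dir.rsubdirs), which is handy when a script wants the figures without parsing ls output; availability depends on the client version, and the path below is hypothetical.

    # --only-values prints the bare number with no trailing newline, hence the echo
    getfattr -n ceph.dir.rbytes   --only-values /mnt/ceph/users/alice; echo
    getfattr -n ceph.dir.rfiles   --only-values /mnt/ceph/users/alice; echo
    getfattr -n ceph.dir.rsubdirs --only-values /mnt/ceph/users/alice; echo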

Re: CephFS Space Accounting and Quotas

2013-03-06 Thread Jim Schutt
On 03/06/2013 01:21 PM, Greg Farnum wrote: Also, this issue of stat on files created on other clients seems like it's going to be problematic for many interactions our users will have with the files created by their parallel compute jobs - any suggestion on how to avoid or fix it?

Re: CephFS Space Accounting and Quotas

2013-03-06 Thread Greg Farnum
On Wednesday, March 6, 2013 at 1:28 PM, Jim Schutt wrote: On 03/06/2013 01:21 PM, Greg Farnum wrote: Also, this issue of stat on files created on other clients seems like it's going to be problematic for many interactions our users will have with the files created by their parallel
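
A quick way to see the behaviour being described (the path is invented for the example): run the same stat twice from a client that did not write the file. The first call can stall while the MDS obtains an up-to-date size and mtime from the client still holding write caps; a repeat is typically served from the now-fresh metadata.

    # run from a client other than the one that wrote output.0017
    time stat -c '%s %y' /mnt/ceph/job42/output.0017   # may stall on cap recall
    time stat -c '%s %y' /mnt/ceph/job42/output.0017   # repeat usually returns quickly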

Re: CephFS Space Accounting and Quotas

2013-03-06 Thread Sage Weil
On Wed, 6 Mar 2013, Greg Farnum wrote: 'ls -lh dir' seems to be just the thing if you already know dir. And it's perfectly suitable for our use case of not scheduling new jobs for users consuming too much space. I was thinking I might need to find a subtree where all the
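
A sketch of the scheduler-side check Sage has in mind might look like the following; the 10 TiB soft limit, the path layout, and the use of the directory's apparent size as "space consumed" are all assumptions for illustration.

    LIMIT=$((10 * 1024 * 1024 * 1024 * 1024))    # 10 TiB soft limit (example value)
    used=$(stat -c %s "/mnt/ceph/users/$USER")   # recursive bytes for the whole tree
    if [ "$used" -gt "$LIMIT" ]; then
        echo "user $USER is over the soft limit ($used bytes); holding new jobs" >&2
        exit 1
    fi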

Re: CephFS Space Accounting and Quotas

2013-03-06 Thread Greg Farnum
On Wednesday, March 6, 2013 at 3:14 PM, Jim Schutt wrote: When I'm doing these stat operations the file system is otherwise idle. What's the cluster look like? This is just one active MDS and a couple hundred clients? What is happening is that once one of these slow stat operations on a