Re: fts: avoid O(n^2) ext3/ext4 performance hit on million+-entry dirs

2008-09-26 Thread Jim Meyering
Jim Meyering [EMAIL PROTECTED] wrote:
 Bruno Haible [EMAIL PROTECTED] wrote:
 Jim Meyering wrote:
 +   if (nitems > _FTS_INODE_SORT_DIR_ENTRIES_THRESHOLD
 +       && !sp->fts_compar
 +       && dirent_inode_sort_may_be_useful (sp)) {
 +     sp->fts_compar = fts_compare_ino;
 +     head = fts_sort (sp, head, nitems);
 +     sp->fts_compar = NULL;
 +   }

 This uses fts_sort, which uses qsort, which is O(n^2) in the worst case.
 (Even the implementation in glibc can be O(n^2), if it guesses that mergesort
 takes too much memory.)

 Couldn't this be avoided by using Paul's mpsort module, and changing fts_sort
 to allocate 50% more room, as scratch space for mpsort?

 Good point!
 That would make a fine improvement, as a separate patch.

 The new use of qsort in the code I've just proposed for
 rm would benefit from the same treatment.

On second thought, I'm not so sure.
glibc's qsort function does revert to quicksort, but
only upon failure to malloc a buffer for indirect sorting.
Yet each of fts.c and remove.c is already sorting an array
of pointers, so there's no gain in the indirection.

And even if it does resort to using quicksort, the odds of
non-contrived input (the inode numbers) provoking O(N^2)
performance seem vanishingly small.

However, I note that fts_sort would benefit from a rewrite
to make it use mpsort rather than manually setting up its
array of pointers.




Re: fts: avoid O(n^2) ext3/ext4 performance hit on million+-entry dirs

2008-09-26 Thread Jim Meyering
Jim Meyering [EMAIL PROTECTED] wrote:
 Here is a patch that makes it so tools using fts,
 like chmod, chown, chgrp, chcon, du, and find are no
 longer susceptible to an O(n^2) performance penalty when
 processing very large directory-entry counts (as in millions).
 I first noticed the problem on ext3 and ext4 file systems;
 the patch also improves performance on reiserfs and xfs,
 though not on tmpfs.

I've pushed that with a fix for the comparison function
problem that Ralf spotted:

  http://git.savannah.gnu.org/gitweb/?p=gnulib.git;a=commitdiff;h=2f2978ede97205c49d3e568ccffa5a04fb53326b

then spotted a typo, and pushed the correction:

  http://git.savannah.gnu.org/gitweb/?p=gnulib.git;a=commit;h=1d569ca4e7e6147793e6e6510e5a36a4139b2f31




Re: fts: avoid O(n^2) ext3/ext4 performance hit on million+-entry dirs

2008-09-26 Thread Matthew Woehlke

Bruno Haible wrote:

Jim Meyering wrote:

+   if (nitems > _FTS_INODE_SORT_DIR_ENTRIES_THRESHOLD
+       && !sp->fts_compar
+       && dirent_inode_sort_may_be_useful (sp)) {
+     sp->fts_compar = fts_compare_ino;
+     head = fts_sort (sp, head, nitems);
+     sp->fts_compar = NULL;
+   }


This uses fts_sort, which uses qsort, which is O(n^2) in the worst case.
(Even the implementation in glibc can be O(n^2), if it guesses that mergesort
takes too much memory.)

Couldn't this be avoided by using Paul's mpsort module, and changing fts_sort
to allocate 50% more room, as scratch space for mpsort?


You're talking about inode numbers (i.e. fixed-sized keys), yes? Could 
this be a case for a radix sort?


--
Matthew
Please do not quote my e-mail address unobfuscated in message bodies.
--
Who wants to sing? -- Orcs (Warcraft II)





Re: fts: avoid O(n^2) ext3/ext4 performance hit on million+-entry dirs

2008-09-25 Thread Ralf Wildenhues
Hi Jim,

* Jim Meyering wrote on Thu, Sep 25, 2008 at 06:16:58PM CEST:
 --- a/lib/fts.c
 +++ b/lib/fts.c

 +/* A comparison function to sort on increasing inode number.
 +   For some file system types, sorting either way makes a huge
 +   performance difference for a directory with very many entries,
 +   but sorting on increasing values is slightly better than sorting
 +   on decreasing values.  The difference is in the 5% range.  */
 +static int
 +fts_compare_ino (struct _ftsent const **a, struct _ftsent const **b)
 +{
 +  int diff = (b[0]->fts_statp->st_ino
 +   - a[0]->fts_statp->st_ino);

This can overflow or underflow and then give the wrong sign, no?

 +  return diff;
 +}

Cheers,
Ralf




Re: fts: avoid O(n^2) ext3/ext4 performance hit on million+-entry dirs

2008-09-25 Thread Jim Meyering
Ralf Wildenhues [EMAIL PROTECTED] wrote:
 * Jim Meyering wrote on Thu, Sep 25, 2008 at 06:16:58PM CEST:
 --- a/lib/fts.c
 +++ b/lib/fts.c

 +/* A comparison function to sort on increasing inode number.
 +   For some file system types, sorting either way makes a huge
 +   performance difference for a directory with very many entries,
 +   but sorting on increasing values is slightly better than sorting
 +   on decreasing values.  The difference is in the 5% range.  */
 +static int
 +fts_compare_ino (struct _ftsent const **a, struct _ftsent const **b)
 +{
 +  int diff = (b[0]->fts_statp->st_ino
 +  - a[0]->fts_statp->st_ino);

 This can overflow or underflow and then give the wrong sign, no?

Hi Ralf,

Thanks for the quick feedback.
You're right, of course, and I'll fold in this change:

diff --git a/lib/fts.c b/lib/fts.c
index cd14ec4..3cf49fa 100644
--- a/lib/fts.c
+++ b/lib/fts.c
@@ -981,9 +981,8 @@ static bool dirent_inode_sort_may_be_useful (FTS const *sp) 
{ return true; }
 static int
 fts_compare_ino (struct _ftsent const **a, struct _ftsent const **b)
 {
-  int diff = (b[0]->fts_statp->st_ino
- - a[0]->fts_statp->st_ino);
-  return diff;
+  return (a[0]->fts_statp->st_ino > b[0]->fts_statp->st_ino ? 1
+ : b[0]->fts_statp->st_ino > a[0]->fts_statp->st_ino ? -1 : 0);
 }

 /*




Re: fts: avoid O(n^2) ext3/ext4 performance hit on million+-entry dirs

2008-09-25 Thread Bruno Haible
Jim Meyering wrote:
 + if (nitems > _FTS_INODE_SORT_DIR_ENTRIES_THRESHOLD
 +     && !sp->fts_compar
 +     && dirent_inode_sort_may_be_useful (sp)) {
 +   sp->fts_compar = fts_compare_ino;
 +   head = fts_sort (sp, head, nitems);
 +   sp->fts_compar = NULL;
 + }

This uses fts_sort, which uses qsort, which is O(n^2) in the worst case.
(Even the implementation in glibc can be O(n^2), if it guesses that mergesort
takes too much memory.)

Couldn't this be avoided by using Paul's mpsort module, and changing fts_sort
to allocate 50% more room, as scratch space for mpsort?

Bruno