Re: very poor ext3 write performance on big filesystems?

2008-02-20 Thread Jan Engelhardt

On Feb 20 2008 09:44, David Rees wrote:
>On Wed, Feb 20, 2008 at 2:57 AM, Jan Engelhardt <[EMAIL PROTECTED]> wrote:
>>  But GNU tar does not handle acls and xattrs. So back to rsync/cp/mv.
>
>Huh? The version of tar on my Fedora 8 desktop (tar-1.17-7) does. Just
>add the --xattrs option (which turns on --acls and --selinux).

Yeah, they probably whipped it up with some patches.

$ tar --xattrs
tar: unrecognized option `--xattrs'
Try `tar --help' or `tar --usage' for more information.
$ tar --acl
tar: unrecognized option `--acl'
Try `tar --help' or `tar --usage' for more information.
$ rpm -q tar
tar-1.17-21
(Not everything that runs rpm is a fedorahat, though)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: very poor ext3 write performance on big filesystems?

2008-02-20 Thread David Rees
On Wed, Feb 20, 2008 at 2:57 AM, Jan Engelhardt <[EMAIL PROTECTED]> wrote:
>  But GNU tar does not handle acls and xattrs. So back to rsync/cp/mv.

Huh? The version of tar on my Fedora 8 desktop (tar-1.17-7) does. Just
add the --xattrs option (which turns on --acls and --selinux).
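
Upstream GNU tar 1.17 does not know these flags (see the transcript above), so whether they work depends on the distribution's build. A small sketch for probing the local binary from a script (flag spelling as in the Fedora package):

```shell
# Probe the local tar for the xattr patch by scanning its help text;
# a stock GNU tar 1.17 build will report the flag as unsupported.
if tar --help 2>/dev/null | grep -q -- '--xattrs'; then
    echo "xattrs: supported"
else
    echo "xattrs: unsupported"
fi
```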

-Dave


Re: very poor ext3 write performance on big filesystems?

2008-02-20 Thread Jan Engelhardt

On Feb 18 2008 10:35, Theodore Tso wrote:
>On Mon, Feb 18, 2008 at 04:57:25PM +0100, Andi Kleen wrote:
>> > Use cp
>> > or a tar pipeline to move the files.
>> 
>> Are you sure cp handles hardlinks correctly? I know tar does,
>> but I have my doubts about cp.
>
>I *think* GNU cp does the right thing with --preserve=links.  I'm not
>100% sure, though --- like you, probably, I always use tar for moving
>or copying directory hierarchies.

But GNU tar does not handle acls and xattrs. So back to rsync/cp/mv.
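
Ted's hedge about cp (quoted above) is easy to settle empirically. A quick sketch using throwaway paths made up for the demo:

```shell
# Does GNU cp preserve hard links with --preserve=links?  Make a pair
# of hard-linked files, copy the tree, and compare link counts.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"

cp -r --preserve=links "$src/." "$dst/"

stat -c %h "$dst/a" "$dst/b"   # prints 2 and 2 if the link survived
```

(`stat -c %h` is the GNU coreutils spelling for the hard-link count.)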


Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Mark Lord

Paulo Marques wrote:
> Mark Lord wrote:
>> Theodore Tso wrote:
>> ..
>>> The following ld_preload can help in some cases.  Mutt has this hack
>>> encoded in for maildir directories, which helps.
>> ..
>> Oddly enough, that same spd_readdir() preload craps out here too
>> when used with "rm -r" on largish directories.
>
> From looking at the code, I think I've found at least one bug in opendir:
> ...
> 	dnew = realloc(dirstruct->dp,
> 		       dirstruct->max * sizeof(struct dir_s));
> ...
>
> Shouldn't this be: "...*sizeof(struct dirent_s));"?

Yeah, that's one bug.
Another is that ->fd is frequently left uninitialized, yet later used.

Fixing those didn't change the null pointer deaths, though.




Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Paulo Marques

Mark Lord wrote:
> Theodore Tso wrote:
> ..
>> The following ld_preload can help in some cases.  Mutt has this hack
>> encoded in for maildir directories, which helps.
> ..
>
> Oddly enough, that same spd_readdir() preload craps out here too
> when used with "rm -r" on largish directories.

From looking at the code, I think I've found at least one bug in opendir:
...
		dnew = realloc(dirstruct->dp,
			       dirstruct->max * sizeof(struct dir_s));
...

Shouldn't this be: "...*sizeof(struct dirent_s));"?

--
Paulo Marques - www.grupopie.com

"Nostalgia isn't what it used to be."


Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Mark Lord

Mark Lord wrote:
> Theodore Tso wrote:
> ..
>> The following ld_preload can help in some cases.  Mutt has this hack
>> encoded in for maildir directories, which helps.
> ..
>
> Oddly enough, that same spd_readdir() preload craps out here too
> when used with "rm -r" on largish directories.
>
> I added a bit more debugging to it, and it always craps out like this:
>
> opendir dir=0x805ad10((nil))
> Readdir64 dir=0x805ad10 pos=0/289/290
> Readdir64 dir=0x805ad10 pos=1/289/290
> Readdir64 dir=0x805ad10 pos=2/289/290
> Readdir64 dir=0x805ad10 pos=3/289/290
> Readdir64 dir=0x805ad10 pos=4/289/290
> ...
> Readdir64 dir=0x805ad10 pos=287/289/290
> Readdir64 dir=0x805ad10 pos=288/289/290
> Readdir64 dir=0x805ad10 pos=289/289/290
> Readdir64 dir=0x805ad10 pos=0/289/290
> Readdir64: dirstruct->dp=(nil)
> Readdir64: ds=(nil)
> Segmentation fault (core dumped)
>
> Always.  The "rm -r" loops over the directory, as shown above,
> and then tries to re-access entry 0 somehow, at which point
> it discovers that it's been NULLed out.
>
> Which is weird, because the local seekdir() was never called,
> and the code never zeroed/freed that memory itself
> (I've got printfs in there..).
>
> Nulling out the qsort has no effect, and smaller/larger
> ALLOC_STEPSIZE values don't seem to matter.
>
> But.. when the entire tree is in RAM (freshly unpacked .tar),
> it seems to have no problems with it.  As opposed to an uncached tree.

I take back that last point -- it also fails even when the tree *is* cached.


Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Mark Lord

Theodore Tso wrote:
..
> The following ld_preload can help in some cases.  Mutt has this hack
> encoded in for maildir directories, which helps.
..

Oddly enough, that same spd_readdir() preload craps out here too
when used with "rm -r" on largish directories.

I added a bit more debugging to it, and it always craps out like this:

opendir dir=0x805ad10((nil))
Readdir64 dir=0x805ad10 pos=0/289/290
Readdir64 dir=0x805ad10 pos=1/289/290
Readdir64 dir=0x805ad10 pos=2/289/290
Readdir64 dir=0x805ad10 pos=3/289/290
Readdir64 dir=0x805ad10 pos=4/289/290
...
Readdir64 dir=0x805ad10 pos=287/289/290
Readdir64 dir=0x805ad10 pos=288/289/290
Readdir64 dir=0x805ad10 pos=289/289/290
Readdir64 dir=0x805ad10 pos=0/289/290
Readdir64: dirstruct->dp=(nil)
Readdir64: ds=(nil)
Segmentation fault (core dumped)

Always.  The "rm -r" loops over the directory, as shown above,
and then tries to re-access entry 0 somehow, at which point
it discovers that it's been NULLed out.

Which is weird, because the local seekdir() was never called,
and the code never zeroed/freed that memory itself
(I've got printfs in there..).

Nulling out the qsort has no effect, and smaller/larger
ALLOC_STEPSIZE values don't seem to matter.

But.. when the entire tree is in RAM (freshly unpacked .tar),
it seems to have no problems with it.  As opposed to an uncached tree.

Peculiar.. I wonder where the bug is?


Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Chris Mason
On Tuesday 19 February 2008, Tomasz Chmielewski wrote:
> Chris Mason schrieb:
> > On Tuesday 19 February 2008, Tomasz Chmielewski wrote:
> >> Theodore Tso schrieb:
> >>
> >> (...)
> >>
> >>> The following ld_preload can help in some cases.  Mutt has this hack
> >>> encoded in for maildir directories, which helps.
> >>
> >> It doesn't work very reliably for me.
> >>
> >> For some reason, it hangs for me sometimes (doesn't remove any files, rm
> >> -rf just stalls), or segfaults.
> >
> > You can go the low-tech route (assuming your file names don't have spaces
> > in them)
> >
> > find . -printf "%i %p\n" | sort -n | awk '{print $2}' | xargs rm
>
> Why should it make a difference?

It does something similar to Ted's ld preload, sorting the results from 
readdir by inode number before using them.  You will still seek quite a lot 
between the directory entries, but operations on the files themselves will go 
in a much more optimal order.  It might help.

>
> Does "find" find filenames/paths faster than "rm -r"?
>
> Or is "find once/remove once" faster than "find files/rm files/find
> files/rm files/...", which I suppose "rm -r" does?

rm -r removes things in the order that readdir returns them.  In your
hard-linked tree (on almost any FS), this will be very random.  The sorting is
probably the best you can do from userland to optimize the ordering.
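
Chris's pipeline above breaks on names with spaces because awk splits on whitespace. A sketch of the same inode-sort trick that tolerates spaces (embedded newlines in names still break it; the scratch directory and names below are made up for the demo):

```shell
# Build a small scratch tree, then unlink its files in inode order:
# find prints "<inode>\t<path>", sort orders by inode number, cut keeps
# just the path, and xargs splits on newlines instead of whitespace.
demo=$(mktemp -d)
touch "$demo/a b" "$demo/c d"
ln "$demo/a b" "$demo/hard link to a"

find "$demo" -type f -printf '%i\t%p\n' | sort -n | cut -f2- | xargs -d '\n' rm --

find "$demo" -type f | wc -l   # all gone: prints 0
```

(`find -printf` and `xargs -d` are GNU extensions, which is fine here since the thread is about Linux boxes.)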

-chris



Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Tomasz Chmielewski

Chris Mason schrieb:
> On Tuesday 19 February 2008, Tomasz Chmielewski wrote:
>> Theodore Tso schrieb:
>>
>> (...)
>>
>>> The following ld_preload can help in some cases.  Mutt has this hack
>>> encoded in for maildir directories, which helps.
>>
>> It doesn't work very reliably for me.
>>
>> For some reason, it hangs for me sometimes (doesn't remove any files, rm
>> -rf just stalls), or segfaults.
>
> You can go the low-tech route (assuming your file names don't have spaces in
> them)
>
> find . -printf "%i %p\n" | sort -n | awk '{print $2}' | xargs rm

Why should it make a difference?

Does "find" find filenames/paths faster than "rm -r"?

Or is "find once/remove once" faster than "find files/rm files/find
files/rm files/...", which I suppose "rm -r" does?



--
Tomasz Chmielewski
http://wpkg.org


Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Chris Mason
On Tuesday 19 February 2008, Tomasz Chmielewski wrote:
> Theodore Tso schrieb:
>
> (...)
>
> > The following ld_preload can help in some cases.  Mutt has this hack
> > encoded in for maildir directories, which helps.
>
> It doesn't work very reliably for me.
>
> For some reason, it hangs for me sometimes (doesn't remove any files, rm
> -rf just stalls), or segfaults.

You can go the low-tech route (assuming your file names don't have spaces in 
them)

find . -printf "%i %p\n" | sort -n | awk '{print $2}' | xargs rm

>
>
> As most of the ideas here in this thread assume (re)creating a new
> filesystem from scratch - would perhaps playing with
> /proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio help a
> bit?

Probably not.  You're seeking between all the inodes on the box, and probably 
not bound by the memory used.

-chris


Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Tomasz Chmielewski

Theodore Tso schrieb:
> (...)
>
> The following ld_preload can help in some cases.  Mutt has this hack
> encoded in for maildir directories, which helps.

It doesn't work very reliably for me.

For some reason, it hangs for me sometimes (doesn't remove any files, rm
-rf just stalls), or segfaults.

As most of the ideas here in this thread assume (re)creating a new
filesystem from scratch - would perhaps playing with
/proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio help a bit?

--
Tomasz Chmielewski
http://wpkg.org


Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Paul Slootman
On Mon 18 Feb 2008, Andi Kleen wrote:
> On Mon, Feb 18, 2008 at 10:16:32AM -0500, Theodore Tso wrote:
> > On Mon, Feb 18, 2008 at 04:02:36PM +0100, Tomasz Chmielewski wrote:
> > > I tried to copy that filesystem once (when it was much smaller) with
> > > "rsync -a -H", but after 3 days, rsync was still building an index and
> > > didn't copy any file.
> > 
> > If you're going to copy the whole filesystem don't use rsync! 
> 
> Yes, I managed to kill systems (drive them really badly into oom and
> get very long swap storms) with rsync -H in the past too. Something is very 
> wrong with the rsync implementation of this.

Note that the soon-to-be-released version 3.0.0 of rsync has very much
improved performance here, both in speed and memory usage, thanks to a
new incremental transfer protocol (before it read in the complete file
list, first on the source, then on the target, and then only started to
do the actual work).


Paul Slootman


Re: very poor ext3 write performance on big filesystems?

2008-02-19 Thread Vladislav Bolkhovitin

Tomasz Chmielewski wrote:
> I have a 1.2 TB (of which 750 GB is used) filesystem which holds
> almost 200 million files.
> 1.2 TB doesn't make this filesystem that big, but 200 million files
> is a decent number.
>
> Most of the files are hardlinked multiple times, some of them are
> hardlinked thousands of times.
>
> Recently I began removing some of the unneeded files (or hardlinks) and to
> my surprise, it takes longer than I initially expected.
>
> After the cache is emptied (echo 3 > /proc/sys/vm/drop_caches) I can usually
> remove about 5-20 files with moderate performance. I see up to
> 5000 kB read/write from/to the disk, wa reported by top is usually 20-70%.
>
> After that, waiting for IO grows to 99%, and disk write speed is down to
> 50 kB/s - 200 kB/s (fifty - two hundred kilobytes/s).
>
> Is it normal to expect the write speed to go down to only a few dozen
> kilobytes/s? Is it because of that many seeks? Can it be somehow
> optimized? The machine has loads of free memory, perhaps it could be
> used better?
>
> Also, writing big files is very slow - it takes more than 4 minutes to
> write and sync a 655 MB file (so, a little bit more than 1 MB/s) -
> fragmentation perhaps?

It would be really interesting if you tried your workload with XFS. In my
experience, XFS considerably outperforms ext3 on big (> a few hundred MB)
disks.


Vlad


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 05:16:55PM +0100, Tomasz Chmielewski wrote:
> Theodore Tso schrieb:
>
>> I'd really need to know exactly what kind of operations you were
>> trying to do that were causing problems before I could say for sure.
>> Yes, you said you were removing unneeded files, but how were you doing
>> it?  With rm -r of old hard-linked directories?
>
> Yes, with rm -r.

You should definitely try the spd_readdir hack; that will help reduce
the seek times.  This will probably help on any block group oriented
filesystems, including XFS, etc.

>> How big are the
>> average files involved?  Etc.
>
> It's hard to estimate the average size of a file. I'd say there are not 
> many files bigger than 50 MB.

Well, Ext4 will help for files bigger than 48k.

The other thing that might help for you is using an external journal
on a separate hard drive (either for ext3 or ext4).  That will help
alleviate some of the seek storms going on, since the journal is
written to only sequentially, and putting it on a separate hard drive
will help remove some of the contention on the hard drive.  
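
For reference, setting up such an external journal looks roughly like this (a sketch with hypothetical device names; the mke2fs/tune2fs flags shown are the standard ones, but check your versions):

```shell
# Turn a partition on a second disk into a dedicated journal device.
mke2fs -O journal_dev /dev/sdb1

# Point a freshly created ext3 filesystem at it...
mke2fs -j -J device=/dev/sdb1 /dev/sda1

# ...or retrofit an existing filesystem: drop the internal journal first.
tune2fs -O ^has_journal /dev/sda1
tune2fs -J device=/dev/sdb1 /dev/sda1
```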

I assume that your 1.2 TB filesystem is located on a RAID array; did
you use the mke2fs -E stride option to make sure all of the bitmaps
don't get concentrated on one hard drive spindle?  One of the failure
modes which can happen is if you use a 4+1 raid 5 setup, that all of
the block and inode bitmaps can end up getting laid out on a single
hard drive, so it becomes a bottleneck for bitmap intensive workloads
--- including "rm -rf".  So that's another thing that might be going
on.  If you do a "dumpe2fs", and look at the block numbers for the
block and inode allocation bitmaps, and you find that they are all
landing on the same physical hard drive, then that's very clearly the
biggest problem given an "rm -rf" workload.  You should be able to see
this as well visually; if one hard drive has its hard drive light
almost constantly on, and the other ones don't have much activity,
that's probably what is happening.
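
One way to check this from a shell (illustrative commands; the stride arithmetic assumes 64 KiB RAID chunks and 4 KiB filesystem blocks):

```shell
# Where did the allocation bitmaps land?  With a proper stride they
# should be spread across the array, not piled on one spindle.
dumpe2fs /dev/sda 2>/dev/null | grep -E 'bitmap at' | head -20

# stride is filesystem blocks per RAID chunk, e.g. 64 KiB chunks
# with 4 KiB blocks gives 65536 / 4096 = 16:
mke2fs -E stride=16 /dev/md0
```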

- Ted


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Tomasz Chmielewski

Theodore Tso schrieb:
>> Are there better choices than ext3 for a filesystem with lots of hardlinks? 
>> ext4, once it's ready? xfs?
>
> All filesystems are going to have problems keeping inodes close to
> directories when you have huge numbers of hard links.
>
> I'd really need to know exactly what kind of operations you were
> trying to do that were causing problems before I could say for sure.
> Yes, you said you were removing unneeded files, but how were you doing
> it?  With rm -r of old hard-linked directories?

Yes, with rm -r.

> How big are the
> average files involved?  Etc.

It's hard to estimate the average size of a file. I'd say there are not 
many files bigger than 50 MB.


Basically, it's a filesystem where backups are kept. Backups are made 
with BackupPC [1].


Imagine a full rootfs backup of 100 Linux systems.

Instead of compressing and writing "/bin/bash" 100 times for each 
separate system, we do it once, and hardlink. Then, keep 40 copies back, 
and you have 4000 hardlinks.
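
That pooling scheme is easy to demonstrate with plain ln (toy file names, not BackupPC's real pool layout):

```shell
set -e
mkdir -p pool backups/host1 backups/host2

# Store the file contents once in the pool...
printf 'contents of /bin/bash\n' > pool/cbf3_bash

# ...and let each host's backup tree hardlink to it.
ln pool/cbf3_bash backups/host1/bash
ln pool/cbf3_bash backups/host2/bash

# One inode, three names: the data exists on disk exactly once.
stat -c %h pool/cbf3_bash   # prints 3
```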


For individual or user files, the number of hardlinks will be smaller of 
course.


The directories I want to remove have usually a structure of a "normal" 
Linux rootfs, nothing special there (other than most of the files will 
have multiple hardlinks).



I noticed using write back helps a tiny bit, but as dm and md don't 
support write barriers, I'm not very eager to use it.



[1] http://backuppc.sf.net
http://backuppc.sourceforge.net/faq/BackupPC.html#some_design_issues



--
Tomasz Chmielewski
http://wpkg.org



Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 04:57:25PM +0100, Andi Kleen wrote:
> > Use cp
> > or a tar pipeline to move the files.
> 
> Are you sure cp handles hardlinks correctly? I know tar does,
> but I have my doubts about cp.

I *think* GNU cp does the right thing with --preserve=links.  I'm not
100% sure, though --- like you, probably, I always use tar for moving
or copying directory hierarchies.

   - Ted


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Andi Kleen
On Mon, Feb 18, 2008 at 10:16:32AM -0500, Theodore Tso wrote:
> On Mon, Feb 18, 2008 at 04:02:36PM +0100, Tomasz Chmielewski wrote:
> > I tried to copy that filesystem once (when it was much smaller) with "rsync 
> > -a -H", but after 3 days, rsync was still building an index and didn't copy 
> > any file.
> 
> If you're going to copy the whole filesystem don't use rsync! 

Yes, I managed to kill systems (drive them really badly into oom and
get very long swap storms) with rsync -H in the past too. Something is very 
wrong with the rsync implementation of this.

> Use cp
> or a tar pipeline to move the files.

Are you sure cp handles hardlinks correctly? I know tar does,
but I have my doubts about cp.

-Andi


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 04:02:36PM +0100, Tomasz Chmielewski wrote:
> I tried to copy that filesystem once (when it was much smaller) with "rsync 
> -a -H", but after 3 days, rsync was still building an index and didn't copy 
> any file.

If you're going to copy the whole filesystem don't use rsync!  Use cp
or a tar pipeline to move the files.

> Also, as files/hardlinks come and go, it would degrade again.

Yes...

> Are there better choices than ext3 for a filesystem with lots of hardlinks? 
> ext4, once it's ready? xfs?

All filesystems are going to have problems keeping inodes close to
directories when you have huge numbers of hard links.

I'd really need to know exactly what kind of operations you were
trying to do that were causing problems before I could say for sure.
Yes, you said you were removing unneeded files, but how were you doing
it?  With rm -r of old hard-linked directories?  How big are the
average files involved?  Etc.

- Ted


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 04:18:23PM +0100, Andi Kleen wrote:
> On Mon, Feb 18, 2008 at 09:16:41AM -0500, Theodore Tso wrote:
> > ext3 tries to keep inodes in the same block group as their containing
> > directory.  If you have lots of hard links, obviously it can't really
> > do that, especially since we don't have a good way at mkdir time to
> > tell the filesystem, "Psst!  This is going to be a hard link clone of
> > that directory over there, put it in the same block group".
> 
> Hmm, you think such a hint interface would be worth it?

It would definitely help ext2/3/4.  An interesting question is whether
it would help enough other filesystems that's worth adding.  

> > necessarily removing the dir_index feature.  Dir_index speeds up
> > individual lookups, but it slows down workloads that do a readdir
> 
> But only for large directories right? For kernel source like
> directory sizes it seems to be a general loss.

On my todo list is a hack which does the sorting of directory inodes
by inode number inside the kernel for smallish directories (say, less
than 2-3 blocks) where using the kernel memory space to store the
directory entries is acceptable, and which would speed up dir_index
performance for kernel source-like directory sizes --- without needing
to use the spd_readdir LD_PRELOAD hack.

But yes, right now, if you know that your directories are almost
always going to be kernel source like in size, then omitting dir_index
is probably going to be a good idea.  

- Ted


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Tomasz Chmielewski

Theodore Tso schrieb:

(...)

>> What has helped a bit was to recreate the file system with -O^dir_index
>> dir_index seems to cause more seeks.
>
> Part of it may have simply been recreating the filesystem, not
> necessarily removing the dir_index feature.

You mean, copy data somewhere else, mkfs a new filesystem, and copy data 
back?


Unfortunately, doing it on a file level is not possible with a 
reasonable amount of time.


I tried to copy that filesystem once (when it was much smaller) with 
"rsync -a -H", but after 3 days, rsync was still building an index and 
didn't copy any file.



Also, as files/hardlinks come and go, it would degrade again.


Are there better choices than ext3 for a filesystem with lots of 
hardlinks? ext4, once it's ready? xfs?



--
Tomasz Chmielewski
http://wpkg.org


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Andi Kleen
On Mon, Feb 18, 2008 at 09:16:41AM -0500, Theodore Tso wrote:
> ext3 tries to keep inodes in the same block group as their containing
> directory.  If you have lots of hard links, obviously it can't really
> do that, especially since we don't have a good way at mkdir time to
> tell the filesystem, "Psst!  This is going to be a hard link clone of
> that directory over there, put it in the same block group".

Hmm, you think such a hint interface would be worth it?

> 
> > What has helped a bit was to recreate the file system with -O^dir_index
> > dir_index seems to cause more seeks.
> 
> Part of it may have simply been recreating the filesystem, not

Undoubtedly.

> necessarily removing the dir_index feature.  Dir_index speeds up
> individual lookups, but it slows down workloads that do a readdir

But only for large directories right? For kernel source like
directory sizes it seems to be a general loss.

-Andi



Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 03:03:44PM +0100, Andi Kleen wrote:
> Tomasz Chmielewski <[EMAIL PROTECTED]> writes:
> >
> > Is it normal to expect the write speed go down to only few dozens of
> > kilobytes/s? Is it because of that many seeks? Can it be somehow
> > optimized? 
> 
> I have similar problems on my linux source partition which also
> has a lot of hard linked files (although probably not quite
> as many as you do). It seems like hard linking prevents
> some of the heuristics ext* uses to generate non fragmented
> disk layouts and the resulting seeking makes things slow.

ext3 tries to keep inodes in the same block group as their containing
directory.  If you have lots of hard links, obviously it can't really
do that, especially since we don't have a good way at mkdir time to
tell the filesystem, "Psst!  This is going to be a hard link clone of
that directory over there, put it in the same block group".

> What has helped a bit was to recreate the file system with -O^dir_index
> dir_index seems to cause more seeks.

Part of it may have simply been recreating the filesystem, not
necessarily removing the dir_index feature.  Dir_index speeds up
individual lookups, but it slows down workloads that do a readdir
followed by a stat of all of the files in the workload.  You can work
around this by calling readdir(), sorting all of the entries by inode
number, and then calling open or stat or whatever.  So this can help
out for workloads that are doing find or rm -r on a dir_index
workload.  Basically, it helps for some things, hurts for others.
Once things are in the cache it doesn't matter of course.

The following ld_preload can help in some cases.  Mutt has this hack
encoded in for maildir directories, which helps.

> Also keeping enough free space is also a good idea because that
> allows the file system code better choices on where to place data.

Yep, that too.

- Ted

/*
 * readdir accelerator
 *
 * (C) Copyright 2003, 2004 by Theodore Ts'o.
 *
 * Compile using the command:
 *
 * gcc -o spd_readdir.so -shared spd_readdir.c -ldl
 *
 * Use it by setting the LD_PRELOAD environment variable:
 * 
 * export LD_PRELOAD=/usr/local/sbin/spd_readdir.so
 *
 * %Begin-Header%
 * This file may be redistributed under the terms of the GNU Public
 * License.
 * %End-Header%
 * 
 */

#define ALLOC_STEPSIZE	100
#define MAX_DIRSIZE	0

#define DEBUG

#ifdef DEBUG
#define DEBUG_DIR(x)	{if (do_debug) { x; }}
#else
#define DEBUG_DIR(x)
#endif

#define _GNU_SOURCE
#define __USE_LARGEFILE64

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>
#include <errno.h>
#include <dlfcn.h>

struct dirent_s {
	unsigned long long d_ino;
	long long d_off;
	unsigned short int d_reclen;
	unsigned char d_type;
	char *d_name;
};

struct dir_s {
	DIR	*dir;
	int	num;
	int	max;
	struct dirent_s *dp;
	int	pos;
	int	fd;
	struct dirent ret_dir;
	struct dirent64 ret_dir64;
};

static int (*real_closedir)(DIR *dir) = 0;
static DIR *(*real_opendir)(const char *name) = 0;
static struct dirent *(*real_readdir)(DIR *dir) = 0;
static struct dirent64 *(*real_readdir64)(DIR *dir) = 0;
static off_t (*real_telldir)(DIR *dir) = 0;
static void (*real_seekdir)(DIR *dir, off_t offset) = 0;
static int (*real_dirfd)(DIR *dir) = 0;
static unsigned long max_dirsize = MAX_DIRSIZE;
static int num_open = 0;
#ifdef DEBUG
static int do_debug = 0;
#endif

static void setup_ptr()
{
	char *cp;

	real_opendir = dlsym(RTLD_NEXT, "opendir");
	real_closedir = dlsym(RTLD_NEXT, "closedir");
	real_readdir = dlsym(RTLD_NEXT, "readdir");
	real_readdir64 = dlsym(RTLD_NEXT, "readdir64");
	real_telldir = dlsym(RTLD_NEXT, "telldir");
	real_seekdir = dlsym(RTLD_NEXT, "seekdir");
	real_dirfd = dlsym(RTLD_NEXT, "dirfd");
	if ((cp = getenv("SPD_READDIR_MAX_SIZE")) != NULL) {
		max_dirsize = atol(cp);
	}
#ifdef DEBUG
	if (getenv("SPD_READDIR_DEBUG"))
		do_debug++;
#endif
}

static void free_cached_dir(struct dir_s *dirstruct)
{
	int i;

	if (!dirstruct->dp)
		return;

	for (i=0; i < dirstruct->num; i++) {
		free(dirstruct->dp[i].d_name);
	}
	free(dirstruct->dp);
	dirstruct->dp = 0;
}	

static int ino_cmp(const void *a, const void *b)
{
	const struct dirent_s *ds_a = (const struct dirent_s *) a;
	const struct dirent_s *ds_b = (const struct dirent_s *) b;
	ino_t i_a, i_b;
	
	i_a = ds_a->d_ino;
	i_b = ds_b->d_ino;

	if (ds_a->d_name[0] == '.') {
		if (ds_a->d_name[1] == 0)
			i_a = 0;
		else if ((ds_a->d_name[1] == '.') && (ds_a->d_name[2] == 0))
			i_a = 1;
	}
	if (ds_b->d_name[0] == '.') {
		if (ds_b->d_name[1] == 0)
			i_b = 0;
		else if ((ds_b->d_name[1] == '.') && (ds_b->d_name[2] == 0))
			i_b = 1;
	}

	return (i_a - i_b);
}


DIR *opendir(const char *name)
{
	DIR *dir;
	struct dir_s	*dirstruct;
	struct dirent_s *ds, *dnew;
	struct dirent64 *d;
	struct stat st;

	if (!real_opendir)
		setup_ptr();

	DEBUG_DIR(printf("Opendir(%s) (%d open)\n", name, num_open++));
	dir = (*real_opendir)(name);
	if (!dir)
		return NULL;

	dirstruct = malloc(sizeof(struct 

Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Andi Kleen
Tomasz Chmielewski <[EMAIL PROTECTED]> writes:
>
> Is it normal to expect the write speed go down to only few dozens of
> kilobytes/s? Is it because of that many seeks? Can it be somehow
> optimized? 

I have similar problems on my linux source partition which also
has a lot of hard linked files (although probably not quite
as many as you do). It seems like hard linking prevents
some of the heuristics ext* uses to generate non fragmented
disk layouts and the resulting seeking makes things slow.

What has helped a bit was to recreate the file system with -O^dir_index
dir_index seems to cause more seeks.

Also keeping enough free space is also a good idea because that
allows the file system code better choices on where to place data.

-Andi



very poor ext3 write performance on big filesystems?

2008-02-18 Thread Tomasz Chmielewski

I have a 1.2 TB (of which 750 GB is used) filesystem which holds
almost 200 million files.
1.2 TB doesn't make this filesystem that big, but 200 million files 
is a decent number.



Most of the files are hardlinked multiple times, some of them are
hardlinked thousands of times.


Recently I began removing some of the unneeded files (or hardlinks) and to 
my surprise, it takes longer than I initially expected.



After cache is emptied (echo 3 > /proc/sys/vm/drop_caches) I can usually 
remove about 50000-200000 files with moderate performance. I see up to 
5000 kB read/write from/to the disk, wa reported by top is usually 20-70%.



After that, waiting for IO grows to 99%, and disk write speed is down to 
50 kB/s - 200 kB/s (fifty - two hundred kilobytes/s).



Is it normal to expect the write speed to go down to only a few dozen 
kilobytes/s? Is it because of that many seeks? Can it be somehow 
optimized? The machine has loads of free memory; perhaps it could be 
used better?



Also, writing big files is very slow - it takes more than 4 minutes to 
write and sync a 655 MB file (so, a little bit more than 1 MB/s) - 
fragmentation perhaps?


+ dd if=/dev/zero of=testfile bs=64k count=10000
10000+0 records in
10000+0 records out
655360000 bytes (655 MB) copied, 3,12109 seconds, 210 MB/s
+ sync
0.00user 2.14system 4:06.76elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+883minor)pagefaults 0swaps


# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda              1,2T  697G  452G  61% /mnt/iscsi_backup

# df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda                154M     20M    134M   13% /mnt/iscsi_backup




--
Tomasz Chmielewski
http://wpkg.org



very poor ext3 write performance on big filesystems?

2008-02-18 Thread Tomasz Chmielewski

I have a 1.2 TB (of which 750 GB is used) filesystem which holds
almost 200 millions of files.
1.2 TB doesn't make this filesystem that big, but 200 millions of files 
is a decent number.



Most of the files are hardlinked multiple times, some of them are
hardlinked thousands of times.


Recently I began removing some of unneeded files (or hardlinks) and to 
my surprise, it takes longer than I initially expected.



After cache is emptied (echo 3  /proc/sys/vm/drop_caches) I can usually 
remove about 5-20 files with moderate performance. I see up to 
5000 kB read/write from/to the disk, wa reported by top is usually 20-70%.



After that, waiting for IO grows to 99%, and disk write speed is down to 
50 kB/s - 200 kB/s (fifty - two hundred kilobytes/s).



Is it normal to expect the write speed go down to only few dozens of 
kilobytes/s? Is it because of that many seeks? Can it be somehow 
optimized? The machine has loads of free memory, perhaps it could be 
uses better?



Also, writing big files is very slow - it takes more than 4 minutes to 
write and sync a 655 MB file (so, a little bit more than 1 MB/s) - 
fragmentation perhaps?


+ dd if=/dev/zero of=testfile bs=64k count=1
1+0 records in
1+0 records out
65536 bytes (655 MB) copied, 3,12109 seconds, 210 MB/s
+ sync
0.00user 2.14system 4:06.76elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+883minor)pagefaults 0swaps


# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda  1,2T  697G  452G  61% /mnt/iscsi_backup

# df -i
FilesystemInodes   IUsed   IFree IUse% Mounted on
/dev/sda154M 20M134M   13% /mnt/iscsi_backup




--
Tomasz Chmielewski
http://wpkg.org

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Andi Kleen
Tomasz Chmielewski [EMAIL PROTECTED] writes:

 Is it normal to expect the write speed go down to only few dozens of
 kilobytes/s? Is it because of that many seeks? Can it be somehow
 optimized? 

I have similar problems on my linux source partition which also
has a lot of hard linked files (although probably not quite
as many as you do). It seems like hard linking prevents
some of the heuristics ext* uses to generate non fragmented
disk layouts and the resulting seeking makes things slow.

What has helped a bit was to recreate the file system with -O^dir_index
dir_index seems to cause more seeks.

Also keeping enough free space is also a good idea because that
allows the file system code better choices on where to place data.

-Andi

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 03:03:44PM +0100, Andi Kleen wrote:
 Tomasz Chmielewski [EMAIL PROTECTED] writes:
 
  Is it normal to expect the write speed go down to only few dozens of
  kilobytes/s? Is it because of that many seeks? Can it be somehow
  optimized? 
 
 I have similar problems on my linux source partition which also
 has a lot of hard linked files (although probably not quite
 as many as you do). It seems like hard linking prevents
 some of the heuristics ext* uses to generate non fragmented
 disk layouts and the resulting seeking makes things slow.

ext3 tries to keep inodes in the same block group as their containing
directory.  If you have lots of hard links, obviously it can't really
do that, especially since we don't have a good way at mkdir time to
tell the filesystem, Psst!  This is going to be a hard link clone of
that directory over there, put it in the same block group.

 What has helped a bit was to recreate the file system with -O^dir_index
 dir_index seems to cause more seeks.

Part of it may have simply been recreating the filesystem, not
necessarily removing the dir_index feature.  Dir_index speeds up
individual lookups, but it slows down workloads that do a readdir
followed by a stat of all of the files in the workload.  You can work
around this by calling readdir(), sorting all of the entries by inode
number, and then calling open or stat or whatever.  So this can help
out for workloads that are doing find or rm -r on a dir_index
workload.  Basically, it helps for some things, hurts for others.
Once things are in the cache it doesn't matter of course.

The following ld_preload can help in some cases.  Mutt has this hack
encoded in for maildir directories, which helps.

 Also keeping enough free space is also a good idea because that
 allows the file system code better choices on where to place data.

Yep, that too.

- Ted

/*
 * readdir accelerator
 *
 * (C) Copyright 2003, 2004 by Theodore Ts'o.
 *
 * Compile using the command:
 *
 * gcc -o spd_readdir.so -shared spd_readdir.c -ldl
 *
 * Use it by setting the LD_PRELOAD environment variable:
 * 
 * export LD_PRELOAD=/usr/local/sbin/spd_readdir.so
 *
 * %Begin-Header%
 * This file may be redistributed under the terms of the GNU Public
 * License.
 * %End-Header%
 * 
 */

#define ALLOC_STEPSIZE	100
#define MAX_DIRSIZE	0

#define DEBUG

#ifdef DEBUG
#define DEBUG_DIR(x)	{if (do_debug) { x; }}
#else
#define DEBUG_DIR(x)
#endif

#define _GNU_SOURCE
#define __USE_LARGEFILE64

#include stdio.h
#include unistd.h
#include sys/types.h
#include sys/stat.h
#include stdlib.h
#include string.h
#include dirent.h
#include errno.h
#include dlfcn.h

struct dirent_s {
	unsigned long long d_ino;
	long long d_off;
	unsigned short int d_reclen;
	unsigned char d_type;
	char *d_name;
};

struct dir_s {
	DIR	*dir;
	int	num;
	int	max;
	struct dirent_s *dp;
	int	pos;
	int	fd;
	struct dirent ret_dir;
	struct dirent64 ret_dir64;
};

static int (*real_closedir)(DIR *dir) = 0;
static DIR *(*real_opendir)(const char *name) = 0;
static struct dirent *(*real_readdir)(DIR *dir) = 0;
static struct dirent64 *(*real_readdir64)(DIR *dir) = 0;
static off_t (*real_telldir)(DIR *dir) = 0;
static void (*real_seekdir)(DIR *dir, off_t offset) = 0;
static int (*real_dirfd)(DIR *dir) = 0;
static unsigned long max_dirsize = MAX_DIRSIZE;
static num_open = 0;
#ifdef DEBUG
static int do_debug = 0;
#endif

static void setup_ptr()
{
	char *cp;

	real_opendir = dlsym(RTLD_NEXT, opendir);
	real_closedir = dlsym(RTLD_NEXT, closedir);
	real_readdir = dlsym(RTLD_NEXT, readdir);
	real_readdir64 = dlsym(RTLD_NEXT, readdir64);
	real_telldir = dlsym(RTLD_NEXT, telldir);
	real_seekdir = dlsym(RTLD_NEXT, seekdir);
	real_dirfd = dlsym(RTLD_NEXT, dirfd);
	if ((cp = getenv(SPD_READDIR_MAX_SIZE)) != NULL) {
		max_dirsize = atol(cp);
	}
#ifdef DEBUG
	if (getenv(SPD_READDIR_DEBUG))
		do_debug++;
#endif
}

static void free_cached_dir(struct dir_s *dirstruct)
{
	int i;

	if (!dirstruct-dp)
		return;

	for (i=0; i  dirstruct-num; i++) {
		free(dirstruct-dp[i].d_name);
	}
	free(dirstruct-dp);
	dirstruct-dp = 0;
}	

static int ino_cmp(const void *a, const void *b)
{
	const struct dirent_s *ds_a = (const struct dirent_s *) a;
	const struct dirent_s *ds_b = (const struct dirent_s *) b;
	ino_t i_a, i_b;
	
	i_a = ds_a-d_ino;
	i_b = ds_b-d_ino;

	if (ds_a-d_name[0] == '.') {
		if (ds_a-d_name[1] == 0)
			i_a = 0;
		else if ((ds_a-d_name[1] == '.')  (ds_a-d_name[2] == 0))
			i_a = 1;
	}
	if (ds_b-d_name[0] == '.') {
		if (ds_b-d_name[1] == 0)
			i_b = 0;
		else if ((ds_b-d_name[1] == '.')  (ds_b-d_name[2] == 0))
			i_b = 1;
	}

	return (i_a - i_b);
}


DIR *opendir(const char *name)
{
	DIR *dir;
	struct dir_s	*dirstruct;
	struct dirent_s *ds, *dnew;
	struct dirent64 *d;
	struct stat st;

	if (!real_opendir)
		setup_ptr();

	DEBUG_DIR(printf(Opendir(%s) (%d open)\n, name, num_open++));
	dir = (*real_opendir)(name);
	if (!dir)
		return NULL;

	dirstruct = 

Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Andi Kleen
On Mon, Feb 18, 2008 at 09:16:41AM -0500, Theodore Tso wrote:
 ext3 tries to keep inodes in the same block group as their containing
 directory.  If you have lots of hard links, obviously it can't really
 do that, especially since we don't have a good way at mkdir time to
 tell the filesystem, Psst!  This is going to be a hard link clone of
 that directory over there, put it in the same block group.

Hmm, you think such a hint interface would be worth it?

 
  What has helped a bit was to recreate the file system with -O^dir_index
  dir_index seems to cause more seeks.
 
 Part of it may have simply been recreating the filesystem, not

Undoubtedly.

 necessarily removing the dir_index feature.  Dir_index speeds up
 individual lookups, but it slows down workloads that do a readdir

But only for large directories right? For kernel source like
directory sizes it seems to be a general loss.

-Andi

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 04:18:23PM +0100, Andi Kleen wrote:
 On Mon, Feb 18, 2008 at 09:16:41AM -0500, Theodore Tso wrote:
  ext3 tries to keep inodes in the same block group as their containing
  directory.  If you have lots of hard links, obviously it can't really
  do that, especially since we don't have a good way at mkdir time to
  tell the filesystem, Psst!  This is going to be a hard link clone of
  that directory over there, put it in the same block group.
 
 Hmm, you think such a hint interface would be worth it?

It would definitely help ext2/3/4.  An interesting question is whether
it would help enough other filesystems that's worth adding.  

  necessarily removing the dir_index feature.  Dir_index speeds up
  individual lookups, but it slows down workloads that do a readdir
 
 But only for large directories right? For kernel source like
 directory sizes it seems to be a general loss.

On my todo list is a hack which does the sorting of directory inodes
by inode number inside the kernel for smallish directories (say, less
than 2-3 blocks) where using the kernel memory space to store the
directory entries is acceptable, and which would speed up dir_index
performance for kernel source-like directory sizes --- without needing
to use the spd_readdir LD_PRELOAD hack.

But yes, right now, if you know that your directories are almost
always going to be kernel source like in size, then omitting dir_index
is probably goint to be a good idea.  

- Ted
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 04:02:36PM +0100, Tomasz Chmielewski wrote:
 I tried to copy that filesystem once (when it was much smaller) with rsync 
 -a -H, but after 3 days, rsync was still building an index and didn't copy 
 any file.

If you're going to copy the whole filesystem don't use rsync!  Use cp
or a tar pipeline to move the files.

 Also, as files/hardlinks come and go, it would degrade again.

Yes...

 Are there better choices than ext3 for a filesystem with lots of hardlinks? 
 ext4, once it's ready? xfs?

All filesystems are going to have problems keeping inodes close to
directories when you have huge numbers of hard links.

I'd really need to know exactly what kind of operations you were
trying to do that were causing problems before I could say for sure.
Yes, you said you were removing unneeded files, but how were you doing
it?  With rm -r of old hard-linked directories?  How big are the
average files involved?  Etc.

- Ted
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Tomasz Chmielewski

Theodore Tso schrieb:

(...)


What has helped a bit was to recreate the file system with -O^dir_index
dir_index seems to cause more seeks.


Part of it may have simply been recreating the filesystem, not
necessarily removing the dir_index feature.


You mean, copy data somewhere else, mkfs a new filesystem, and copy data 
back?


Unfortunately, doing it on a file level is not possible with a 
reasonable amount of time.


I tried to copy that filesystem once (when it was much smaller) with 
rsync -a -H, but after 3 days, rsync was still building an index and 
didn't copy any file.



Also, as files/hardlinks come and go, it would degrade again.


Are there better choices than ext3 for a filesystem with lots of 
hardlinks? ext4, once it's ready? xfs?



--
Tomasz Chmielewski
http://wpkg.org


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Andi Kleen
On Mon, Feb 18, 2008 at 10:16:32AM -0500, Theodore Tso wrote:
> On Mon, Feb 18, 2008 at 04:02:36PM +0100, Tomasz Chmielewski wrote:
>> I tried to copy that filesystem once (when it was much smaller) with rsync
>> -a -H, but after 3 days, rsync was still building an index and didn't copy
>> any file.
>
> If you're going to copy the whole filesystem, don't use rsync!

Yes, I have managed to kill systems (driving them badly into OOM and
very long swap storms) with rsync -H in the past too. Something is very
wrong with rsync's implementation of hardlink handling.

> Use cp
> or a tar pipeline to move the files.

Are you sure cp handles hardlinks correctly? I know tar does,
but I have my doubts about cp.

-Andi


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 04:57:25PM +0100, Andi Kleen wrote:
>> Use cp
>> or a tar pipeline to move the files.
>
> Are you sure cp handles hardlinks correctly? I know tar does,
> but I have my doubts about cp.

I *think* GNU cp does the right thing with --preserve=links.  I'm not
100% sure, though --- like you, probably, I always use tar for moving
or copying directory hierarchies.
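For what it's worth, GNU cp's archive mode does appear to preserve
hardlinks within the copied tree; a quick check (temporary paths, GNU
coreutils assumed):

```shell
src=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"
cp -a "$src" "$src.copy"                  # -a implies --preserve=links on GNU cp
stat -c %i "$src.copy/a" "$src.copy/b"    # same inode: the hardlink was preserved
```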

   - Ted


Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Tomasz Chmielewski

Theodore Tso wrote:

>> Are there better choices than ext3 for a filesystem with lots of hardlinks?
>> ext4, once it's ready? xfs?
>
> All filesystems are going to have problems keeping inodes close to
> directories when you have huge numbers of hard links.
>
> I'd really need to know exactly what kind of operations you were
> trying to do that were causing problems before I could say for sure.
> Yes, you said you were removing unneeded files, but how were you doing
> it?  With rm -r of old hard-linked directories?


Yes, with rm -r.



> How big are the
> average files involved?  Etc.


It's hard to estimate the average size of a file. I'd say there are not 
many files bigger than 50 MB.


Basically, it's a filesystem where backups are kept. Backups are made 
with BackupPC [1].


Imagine a full rootfs backup of 100 Linux systems.

Instead of compressing and writing /bin/bash 100 times, once for each
separate system, we do it once and hardlink. Keep 40 backup generations,
and you have 4000 hardlinks.
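The link counts are easy to see with stat; a toy version of this pooling
scheme (temporary paths, hostnames made up, three systems instead of 100):

```shell
tmp=$(mktemp -d)
echo 'binary payload' > "$tmp/pool-file"   # stands in for the one compressed copy of /bin/bash
for host in host1 host2 host3; do
    mkdir -p "$tmp/$host"
    ln "$tmp/pool-file" "$tmp/$host/bash"  # one hardlink per backed-up system, no extra data written
done
stat -c %h "$tmp/pool-file"                # prints 4: the pool copy plus three links
```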


For individual or user files, the number of hardlinks will be smaller of 
course.


The directories I want to remove usually have the structure of a normal
Linux rootfs, nothing special there (other than most of the files having
multiple hardlinks).



I noticed that using write-back helps a tiny bit, but as dm and md don't
support write barriers, I'm not very eager to use it.
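Assuming "write-back" here means ext3's data=writeback journaling mode, it
can be enabled per mount or recorded in the superblock (device and mount
point hypothetical):

```shell
# at mount time (ext3 cannot change the data mode on a remount)
mount -o data=writeback /dev/sda1 /backups
# or record it as the default in the superblock
tune2fs -o journal_data_writeback /dev/sda1
```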



[1] http://backuppc.sf.net
http://backuppc.sourceforge.net/faq/BackupPC.html#some_design_issues



--
Tomasz Chmielewski
http://wpkg.org



Re: very poor ext3 write performance on big filesystems?

2008-02-18 Thread Theodore Tso
On Mon, Feb 18, 2008 at 05:16:55PM +0100, Tomasz Chmielewski wrote:
> Theodore Tso wrote:
>
>> I'd really need to know exactly what kind of operations you were
>> trying to do that were causing problems before I could say for sure.
>> Yes, you said you were removing unneeded files, but how were you doing
>> it?  With rm -r of old hard-linked directories?
>
> Yes, with rm -r.

You should definitely try the spd_readdir hack; that will help reduce
the seek times.  This will probably help on any block group oriented
filesystems, including XFS, etc.
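The hack is an LD_PRELOAD library that sorts readdir() results by inode
number before handing them to the application; the idea can be
approximated by hand with GNU find (throwaway directory; the spd_readdir
library path below is hypothetical and must be built separately):

```shell
# with the preload library (path hypothetical, built from the spd_readdir patch):
#   LD_PRELOAD=/usr/local/lib/spd_readdir.so rm -r old-backup/
# a crude by-hand approximation: unlink files in ascending inode order
tmp=$(mktemp -d)
for i in 1 2 3; do echo data > "$tmp/f$i"; done
find "$tmp" -type f -printf '%i %p\n' | sort -n | cut -d' ' -f2- | xargs rm --
ls -A "$tmp" | wc -l                       # prints 0: everything removed
```

Sorting by inode number makes the unlinks hit the inode tables in roughly
ascending disk order instead of htree hash order, which is where the seek
savings come from.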

>> How big are the
>> average files involved?  Etc.
>
> It's hard to estimate the average size of a file. I'd say there are not
> many files bigger than 50 MB.

Well, Ext4 will help for files bigger than 48k.

The other thing that might help for you is using an external journal
on a separate hard drive (either for ext3 or ext4).  That will help
alleviate some of the seek storms going on, since the journal is
written to only sequentially, and putting it on a separate hard drive
will help remove some of the contention on the hard drive.  
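Setting up an external journal looks roughly like this (device names
hypothetical; the journal device has to exist before it is attached, and
the filesystem must be unmounted for the retrofit):

```shell
# create a dedicated journal device on a partition of the second drive
mke2fs -O journal_dev /dev/sdb1
# attach it when making the main filesystem ...
mke2fs -j -J device=/dev/sdb1 /dev/sda1
# ... or retrofit an existing ext3 filesystem
tune2fs -O ^has_journal /dev/sda1
tune2fs -j -J device=/dev/sdb1 /dev/sda1
```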

I assume that your 1.2 TB filesystem is located on a RAID array; did
you use the mke2fs -E stride option to make sure all of the bitmaps
don't get concentrated on one hard drive spindle?  One of the failure
modes which can happen is if you use a 4+1 raid 5 setup, that all of
the block and inode bitmaps can end up getting laid out on a single
hard drive, so it becomes a bottleneck for bitmap intensive workloads
--- including rm -rf.  So that's another thing that might be going
on.  If you do a dumpe2fs, and look at the block numbers for the
block and inode allocation bitmaps, and you find that they are all
landing on the same physical hard drive, then that's very clearly the
biggest problem given an rm -rf workload.  You should be able to see
this as well visually; if one hard drive has its hard drive light
almost constantly on, and the other ones don't have much activity,
that's probably what is happening.
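Both the layout knob and the check are one-liners (device name and
geometry hypothetical; stride is the RAID chunk size divided by the
filesystem block size):

```shell
# e.g. 64 KiB RAID chunks with 4 KiB blocks: stride = 64 / 4 = 16
mke2fs -j -b 4096 -E stride=16 /dev/md0
# then eyeball where the bitmaps landed across the stripe
dumpe2fs /dev/md0 | grep -i 'bitmap at'
```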

- Ted