Hi!
That O_DIRECT worked pretty well:

Discontinuities before: 1445
Discontinuities after: 398
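(For anyone wanting to reproduce these numbers: they were presumably measured with filefrag or something equivalent. A minimal sketch of that style of measurement, using the FIBMAP ioctl that the old filefrag relied on — note that FIBMAP requires root, and all the names below are mine, not from this thread:)

```c
#define _GNU_SOURCE
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>           /* FIBMAP, FIGETBSZ */

/* Every block whose physical number is not previous+1 starts a new
   extent, i.e. counts as one discontinuity. */
unsigned long count_discontinuities(const unsigned int *blk, size_t n)
{
    unsigned long d = 0;
    for (size_t i = 1; i < n; i++)
        if (blk[i] != blk[i - 1] + 1)
            d++;
    return d;
}

/* Map each logical block of an open file with FIBMAP (requires root)
   and return the file's discontinuity count, or -1 on error. */
long fibmap_discontinuities(int fd)
{
    int blksz;
    struct stat st;

    if (ioctl(fd, FIGETBSZ, &blksz) < 0 || fstat(fd, &st) < 0)
        return -1;

    size_t n = ((size_t)st.st_size + blksz - 1) / blksz;
    unsigned int *blocks = malloc(n * sizeof *blocks);
    if (!blocks)
        return -1;

    for (size_t i = 0; i < n; i++) {
        int b = (int)i;        /* FIBMAP: logical block in, physical out */
        if (ioctl(fd, FIBMAP, &b) < 0) {
            free(blocks);
            return -1;
        }
        blocks[i] = (unsigned int)b;
    }

    long d = (long)count_discontinuities(blocks, n);
    free(blocks);
    return d;
}
```

Summing the counts over every file in a directory tree gives totals like the ones above.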

That result is on a directory that the ordinary 'copy' could not defragment
any further. I had a little trouble aligning buffers and writing files whose
length was not a multiple of the required alignment. Do you think the
improvement is JFS-specific, or do other filesystems work the same way?
Is it correct that on JFS the ideal number of extents is 1 for each file,
independent of file size?
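For reference, here is a minimal sketch of the O_DIRECT rewrite described above. It assumes a 4 KiB alignment (a safe superset of the usual 512-byte requirement); the function name and fallback behaviour are mine, not from this thread. The tail of the file, whose length is usually not a multiple of the alignment, is written from a zero-padded aligned buffer and the file is then truncated back to its true length:

```c
#define _GNU_SOURCE             /* for O_DIRECT */
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

#define ALIGN 4096              /* assumed buffer/length alignment */
#define CHUNK (1024 * ALIGN)    /* 4 MiB writes: large and aligned */

/* Copy src to dst with O_DIRECT writes; returns 0 on success, -1 on error. */
int odirect_copy(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    if (in < 0)
        return -1;

    /* Fall back to buffered I/O if the filesystem rejects O_DIRECT. */
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    if (out < 0 && errno == EINVAL)
        out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) {
        close(in);
        return -1;
    }

    void *buf;
    if (posix_memalign(&buf, ALIGN, CHUNK)) {
        close(in);
        close(out);
        return -1;
    }

    off_t total = 0;
    ssize_t n;
    /* Reads from a regular file are only short at EOF, so padding is
       only ever appended at the very end of the copy. */
    while ((n = read(in, buf, CHUNK)) > 0) {
        /* Round a short final read up to the alignment; the extra bytes
           are zeroes that the ftruncate below removes again. */
        size_t towrite = ((size_t)n + ALIGN - 1) & ~(size_t)(ALIGN - 1);
        if (towrite > (size_t)n)
            memset((char *)buf + n, 0, towrite - (size_t)n);
        if (write(out, buf, towrite) != (ssize_t)towrite) {
            n = -1;
            break;
        }
        total += n;
    }

    free(buf);
    close(in);
    /* Trim the zero padding so the copy has the original length. */
    if (n == 0 && ftruncate(out, total) != 0)
        n = -1;
    close(out);
    return n == 0 ? 0 : -1;
}
```

The large aligned chunks are what give the filesystem the chance to allocate bigger contiguous extents, per Shaggy's suggestion below.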

//Simon


On 10/29/07, Dave Kleikamp <[EMAIL PROTECTED]> wrote:
>
> On Mon, 2007-10-29 at 22:26 +0100, Simon Lundell wrote:
> >
> >
> > On 10/25/07, Dave Kleikamp <[EMAIL PROTECTED]> wrote:
> >         On Thu, 2007-10-25 at 11:18 +0200, Simon Lundell wrote:
> >         >
> >         >
> >         > On 10/23/07, Steve Costaras <[EMAIL PROTECTED]> wrote:
> >         >         Yes, I use the filefrag tool from Theodore
> >         Ts'o (ext2/3) which
> >         >         works on pretty much any filesystem under linux.
> >         That defrag
> >         >         tool you mentioned would work as well (it's just
> >         copying
> >         >         files) I don't like how it doesn't check for file
> >         integrity
> >         >         though.
> >         >
> >         >         Here's a fast one I threw together ages ago which
> >         works to
> >         >         some extent (no pun intended. ;)  )   I never got it
> >         to take a
> >         >         command-line argument as to which directory / mount
> >         point to
> >         >         start on (it just runs from the current directory on
> >         down).
> >         >         But that's easy to change (must have been
> >         interrupted).
> >         >         Anyway do with it what you will.  :)
> >         >
> >         >
> >         > What is the best way to write a file with respect to
> >         > fragmentation? I guess that its best that the filesystem
> >         knows the
> >         > final size in advance, so that it can allocate it in as few
> >         extents as
> >         possible.
> >
> >         When doing a large write, jfs SHOULD at the very least
> >         allocate a
> >         contiguous extent large enough for the data being
> >         written.  Currently it
> >         does not.  It allocates one page at a time.  So on a fragmented
> >         file
> >         system, the file can be quite fragmented.  I have plans to
> >         improve this,
> >         but I haven't gotten to it yet.
> >
> > I've been experimenting with a general defragmenter like the script
> > posted in this thread. The basic algorithm is to rewrite the file, and
> > hope that the copy is less fragmented than the original.  What is the
> > best way of copying/rewriting a file with regards to fragmentation?
> > Will cp do the trick or should one use something else?
>
> Actually, I believe if you open a file with O_DIRECT, and write in large
> chunks, jfs will allocate blocks in groups larger than the page size, so
> you should get better results.  I'm talking off the top of my head and
> haven't verified it, but it is probably worth a try.
>
> Shaggy
> --
> David Kleikamp
> IBM Linux Technology Center
>
>
_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion