I have a 1.6TB JFS partition (Linux) that is roughly a year old.  In
that time the write speed has dropped to 5MB/sec, and the partition
has become nearly unusable.  I mainly use the RAID for MythTV, but
recently it has become too slow for capturing.

filefrag reports some 3GB files with 90,000 extents sitting next to
3GB files with 18 extents, and many files have thousands of extents.
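For anyone wanting to survey the damage the same way, here is a rough
sketch of how I rank files by extent count from filefrag's summary
line ("/path/file: N extents found").  /mnt/video is a stand-in for
the real mount point, and the size cutoff is arbitrary:

```shell
# List the 20 most fragmented large files, worst first.
# filefrag prints one summary line per file; pull out the extent
# count and sort on it.
find /mnt/video -type f -size +100M -print0 \
  | xargs -0 filefrag 2>/dev/null \
  | awk -F': ' '{ n = $2; sub(/ extents? found.*/, "", n); print n " " $1 }' \
  | sort -rn | head -20
```

The awk split on ": " assumes no colons in the file names, which
holds for my recording directory but may not for yours.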

I understand there are no defrag tools available for JFS on Linux,
and I would rather not back up the data and restore it: the data is
important, but not important enough to warrant the time that would take.

Is there another way I can deal with these files?

I copied a file with 3,000 extents off the partition onto a spare
disk, deleted the original, and copied the file back; it ended up with
1,100 extents.  An improvement, but would this method ever get
performance back to a usable level?  What if I were to fill the
remaining space with dd after deleting the original and before
copying it back?  Or should I concentrate on freeing up as much space
as possible before copying any files to/from the partition?

Thanks,
Jason

_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
