On 9/2/05, Matt Stegman <[EMAIL PROTECTED]> wrote:
> As far as repacking goes, I believe xfs_fsr refuses to defrag files if too
> little space is free.  If it's too hard to program a repacker to operate
> efficiently with only a few megabytes free, simply program it to exit with
> error when less than 5% disk is free.
I'd rather leave the repacker running inefficiently if it means it'll run more
efficiently in the future.  [I believe Windows 9x's defragmenter could get away
with having just one free cluster if necessary, although it'd have been
painfully slow.]  Sometimes I can't clear off files unless I archive them, but
they won't archive unless they read fast enough (and fragmentation slows them
down just enough that they won't read fast enough, due to weird time
constraints built into the program that shouldn't be there).

It's possible to defragment tons of really small files with less than 5% free,
and that number changes with the size of the HD.  You can't use an arbitrary
number like 15%, the way Windows XP does, because it varies heavily -- even
with 50% free, if there were no two contiguous free clusters, you couldn't
defragment with the XP defragmenter.  I don't want that kind of behaviour to be
the only way to repack things, in case I end up in a weird spot.

That said, I'd be half-satisfied if a repacker is released months or years
earlier because it doesn't efficiently handle weird cases (so long as it
remembers to try and repack free space too).

As for the 5% error: a warning is safer.  Either that, or put in a force
option to get around the "error".

--
~Mike
- Just my two cents
- No man is an island, and no man is unable.
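P.S. To make the warning-vs-error idea concrete, here's a rough sketch (not
actual repacker code -- the 5% threshold, the statvfs-based check, and the
"--force" flag name are just placeholders I made up) of a low-space check
that refuses by default but can be overridden:

#include <stdio.h>
#include <sys/statvfs.h>

/* Return 0 if it looks safe to repack, -1 otherwise. */
static int check_free_space(const char *path, double min_pct, int force)
{
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return -1;              /* can't even stat the fs; bail out */
    }

    double free_pct = 100.0 * (double)vfs.f_bavail / (double)vfs.f_blocks;

    if (free_pct < min_pct) {
        if (!force) {
            fprintf(stderr, "only %.1f%% free on %s; "
                    "pass --force to repack anyway\n", free_pct, path);
            return -1;          /* the hard error I'd like to avoid */
        }
        fprintf(stderr, "warning: only %.1f%% free, "
                "repacking will be slow\n", free_pct);
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <mountpoint> [--force]\n", argv[0]);
        return 1;
    }
    /* crude stand-in for real option parsing */
    int force = (argc > 2);

    return check_free_space(argv[1], 5.0, force) == 0 ? 0 : 1;
}

That way the default stays conservative, but nobody gets locked out of
repacking just because they're already low on space.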
