It's hard to say.  Basically, I manually recopied files with over
10,000 extents until the copy came out at 100-200 extents, and then
deleted the originals.  Rinse and repeat over most of the drive.
Files with tens of thousands of extents that I didn't really need, I
simply deleted.  Oddly, I had a few directories scattered about with
good write speeds, up to a certain point (somewhere around 4-10 GB) --
but I could never get the root of the filesystem above 6 MB/sec or so.
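
(For what it's worth, the recopy step was basically the following,
with made-up paths just for illustration:

    filefrag /data/bigfile.iso        # e.g. "bigfile.iso: 10427 extents found"
    cp /data/bigfile.iso /data/bigfile.iso.new
    filefrag /data/bigfile.iso.new    # verify the copy landed in far fewer extents
    mv /data/bigfile.iso.new /data/bigfile.iso    # keep the copy, drop the original

repeated by hand for each badly fragmented file filefrag turned up.)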

After getting the drive down to 40% usage, write speed was still just
as slow... and this is where my problem with jfs lies.  With 60% of
the space free, there should be contiguous regions large enough that a
small (1-4 MB) file needs very few extents in any case; i.e., jfs
should be smart enough to place a file where it fits well.  In my
experience on this particular filesystem, using dd to write a single
4095-byte file would take *three* extents every 10th write.  This is
on a filesystem with 4096-byte blocks (correct me if I'm wrong), mind
you -- a file smaller than a single block should never need more than
one extent.  Something drastic must surely have occurred to get jfs to
respond this way.
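
(To be concrete, the test was along these lines -- the mount point and
file names are only examples:

    # write a series of one-block files and see how many extents each gets
    for i in $(seq 1 20); do
        dd if=/dev/zero of=/mnt/jfs/frag-test.$i bs=4095 count=1 2>/dev/null
        sync
        filefrag /mnt/jfs/frag-test.$i
    done

Every one of those should come back as a single extent on a healthy
filesystem.)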

A new development is that this situation apparently wasn't limited to
my 1.7 TB partition.  It's also affecting my /usr and /home
partitions.  This means I could probably produce a partition image
(via dd) that is of a reasonable size to compress and transfer
(~1-2 GB), if someone would like me to save a copy for inspection.
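
(Something along these lines would do it -- the device and output path
below are placeholders for whatever the real partition and scratch
space are:

    # image the partition and compress it for transfer
    dd if=/dev/hda5 bs=1M | gzip -c > /mnt/scratch/usr-partition.img.gz

assuming there's somewhere with enough free room to hold the result.)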

The jfsCommit process seems to be doing the blocking.
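
One way to see what's stalled, for anyone reproducing this, is to list
the processes in uninterruptible sleep while a write hangs:

    # show processes in D state (blocked, typically on I/O)
    ps -eo state,pid,comm | awk '$1 == "D"'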

Jason

On 1/31/07, Rob <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I don't have any fragmented volumes to try this on, but do you have
> any experience with shake: http://vleu.net/shake/
>
> Would it have helped in your case?
>
> Rob
>
> Jason Fisher wrote:
> > I ended up deleting enough to copy the rest elsewhere and start over.
> > I'm using ext3 for the time being, as it can shrink (and subsequently
> > let me replace it with another filesystem if necessary) -- but my
> > speeds are back up to 130MB/sec+, where they should be.
> >
> > I still have jfs on my / and /home, and they too are exhibiting the
> > same slow behavior.  I haven't run filefrag on them yet, but something
> > tells me the problem is there too.
> >
> > It seems like running it close to 100% usage for any amount of time is
> > likely to start a cascading fragmentation effect.  Perhaps there was a
> > bug in a jfs implementation I used once, or a fsck.jfs run messed
> > things up.  100,000+ is a bit absurd though and should never be
> > reached in any real world conditions.
> >
> > On 1/28/07, Christian Kujau <[EMAIL PROTECTED]> wrote:
> >> On Sat, 20 Jan 2007, Jason Fisher wrote:
> >>> I'm down to 75% usage now and write speeds in the root of that
> >>> filesystem are still 5MB/sec.  I do have a directory that gives me
> >>> 180MB/sec writes -- I suppose it has a big set of contiguous blocks
> >>> assigned to it.
> >> Interesting thread, really: I still wonder how other filesystems deal
> >> with fragmentation over time.  Sure, it's good to keep >5% of free
> >> space, but at peak times that limit gets crossed and the fs might be
> >> running at <1% free space until the bofh makes room again.  But the
> >> fragmentation increased heavily during this "1% free period".
> >> It's hard to reproduce too, because most fragmentation really
> >> happens over time, methinks.
> >>
> >> For the record: I have a 30GB jfs on a single scsi disk, created
> >> back in 04/2005 which acts as some kind of "scratch partition", so many
> >> small files and at other times bigger dvd-images get written to the
> >> disk. Often the partition has ~5% of free space, sometimes less. I just
> >> dd'ed a DVD image from the partition to /dev/null...18MB/s it said...
> >>
> >> Christian.
>
