Chris Mason wrote:
> On Wed, 2008-10-22 at 15:06 -0500, Steven Pratt wrote:
>> We have set up a new page which is intended mainly for tracking the performance of BTRFS, but in doing so we are testing other filesystems as well (ext3, ext4, xfs and jfs). Thought some people here might find the results useful.
>
> I think I understand the bad read performance in btrfs.  I was forcing a
> tiny max readahead size.
>
> The current git tree has fixes for it, along with a ton of new code.
Results for the new code (Git pull on 10/29) on the RAID system are complete. Sequential read with a small number of threads has improved dramatically; however, with a large number of threads (128) we see a large drop-off in performance from before, as well as a huge spike in CPU utilization. A quick look at the oprofile output reveals some new functions at the top that seem really out of place in a read-only workload.

samples  %        image name               app name                 symbol name
13752215 23.8658  btrfs.ko                 btrfs                    alloc_extent_state
12840571 22.2837  btrfs.ko                 btrfs                    free_extent_state
9658945  16.7623  vmlinux-2.6.27           vmlinux-2.6.27           crc32c_le

Both of the extent_state functions have overtaken the crc function at the top of the profile. Why would we be messing with extent states on a read-only workload?
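For anyone post-processing these oprofile reports, the columns are samples, percentage, image name, app name, and symbol name. A minimal parsing sketch (plain Python, not from the thread; the sample lines are the figures quoted above):

```python
# Parse oprofile "opreport --symbols"-style lines into structured records.
# Assumed column layout, per the report above:
#   samples  %  image name  app name  symbol name
def parse_oprofile_line(line):
    samples, percent, image, app, symbol = line.split()
    return {
        "samples": int(samples),
        "percent": float(percent),
        "image": image,
        "app": app,
        "symbol": symbol,
    }

report = """\
13752215 23.8658 btrfs.ko btrfs alloc_extent_state
12840571 22.2837 btrfs.ko btrfs free_extent_state
9658945 16.7623 vmlinux-2.6.27 vmlinux-2.6.27 crc32c_le"""

rows = [parse_oprofile_line(l) for l in report.splitlines()]

# The two extent_state functions together account for ~46% of all samples,
# which is what makes them stand out on a read-only workload.
extent_pct = sum(r["percent"] for r in rows if "extent_state" in r["symbol"])
print(f"{extent_pct:.4f}")  # 46.1495
```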

Also of note is that both the mail server and create tests have taken a significant hit as well. Create is off 30-50% from the previous git tree, and the mail server workload is off by about 25%. Random write is off slightly, while random read is pretty much unchanged.

Full results here:
http://btrfs.boxacle.net/repository/raid/October31GIT/October31GIT-vs-October20GIT.html

We are having some HW issues on the single disk system, so no results there yet.

Steve



