2011/12/12 Alexandre Oliva <ol...@lsd.ic.unicamp.br>:
> On Dec  7, 2011, Christian Brunner <c...@muc.de> wrote:
>
>> With this patch applied I get much higher write-io values than without
>> it. Some of the other patches help to reduce the effect, but it's
>> still significant.
>
>> iostat on an unpatched node is giving me:
>
>> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>> sda             105.90     0.37   15.42   14.48  2657.33   560.13   107.61     1.89   62.75   6.26  18.71
>
>> while on a node with this patch it's
>> sda             128.20     0.97   11.10   57.15  3376.80   552.80    57.58    20.58  296.33   4.16  28.36
>
>
>> Also interesting, is the fact that the average request size on the
>> patched node is much smaller.
>
> That's probably expected for writes, as bitmaps are expected to be more
> fragmented, even if used only for metadata (or are you on SSD?)
>
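As a sanity check on the numbers quoted above, iostat's avgrq-sz column is just the total sectors per second divided by the total requests per second, so the smaller value on the patched node follows directly from the jump in w/s. A minimal sketch (function name is my own, not from iostat):

```python
# Reproduce iostat's avgrq-sz (average request size, in 512-byte sectors)
# from the per-second counters quoted above:
#   avgrq-sz = (rsec/s + wsec/s) / (r/s + w/s)

def avgrq_sz(r_s, w_s, rsec_s, wsec_s):
    return (rsec_s + wsec_s) / (r_s + w_s)

unpatched = avgrq_sz(15.42, 14.48, 2657.33, 560.13)
patched = avgrq_sz(11.10, 57.15, 3376.80, 552.80)
print(round(unpatched, 2))  # 107.61, matching the unpatched node
print(round(patched, 2))    # 57.58, matching the patched node
```

In other words, the patched node is issuing roughly four times as many writes per second for about the same write bandwidth, which is exactly what a drop in average request size means.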

It's a traditional hardware RAID5 with spinning disks. I would accept
this if the writes started right after the mount, but in this case it
takes a few hours until the writes increase. That's why I'm almost
certain that something is still wrong.

> Bitmaps are just a different in-memory (and on-disk-cache, if enabled)
> representation of free space, that can be far more compact: one bit per
> disk block, rather than an extent list entry.  They're interchangeable
> otherwise, it's just that searching bitmaps for a free block (bit) is
> somewhat more expensive than taking the next entry from a list, but you
> don't want to use up too much memory with long lists of
> e.g. single-block free extents.
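The tradeoff Alexandre describes can be sketched in a few lines. This is a toy illustration, not btrfs's actual structures; the function names and the convention that a set bit marks a free block are my own:

```python
# Hypothetical sketch of the two free-space representations: an extent list
# hands out the next free block cheaply, while a bitmap must scan for a set
# bit (1 = free block here), but costs only one bit per block no matter how
# fragmented the free space is.

def alloc_from_extents(extents):
    """extents: list of (start, length) free ranges; take from the first entry."""
    start, length = extents[0]
    if length == 1:
        extents.pop(0)           # extent fully consumed
    else:
        extents[0] = (start + 1, length - 1)
    return start

def alloc_from_bitmap(bitmap):
    """bitmap: list of 0/1 per block, 1 = free; linear scan, O(n) worst case."""
    for blocknr, free in enumerate(bitmap):
        if free:
            bitmap[blocknr] = 0  # mark the block allocated
            return blocknr
    return None                  # no free block

# Heavily fragmented free space: one list entry per single-block extent,
# versus a fixed one bit per block in the bitmap.
extents = [(10, 1), (12, 1), (14, 2)]
bitmap = [0] * 10 + [1, 0, 1, 0, 1, 1]
print(alloc_from_extents(extents))  # 10
print(alloc_from_bitmap(bitmap))    # 10
```

This is why the bitmap form wins on memory when free space degenerates into many single-block extents, at the cost of a more expensive search per allocation.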

Thanks for the explanation! I'll try to insert some debugging code
once my test server is ready.

Christian
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html