> I assume compressed blocks can be larger than PAGE_CACHE_SIZE?  This suffers
> from the rather obvious inefficiency that you decompress a big block >
> PAGE_CACHE_SIZE, but only copy one PAGE_CACHE_SIZE page out of it.  If
> multiple files are being read simultaneously (a common occurrence), then
> each is going to replace your one cached uncompressed block
> (sbi->current_cnode_index), leading to decompressing the same blocks over
> and over again on sequential file access.
>
> readpage file A, index 1 -> decompress block X
> readpage file B, index 1 -> decompress block Y (replaces X)
> readpage file A, index 2 -> repeated decompress of block X (replaces Y)
> readpage file B, index 2 -> repeated decompress of block Y (replaces X)
>
> and so on.

Yep.  I've been thinking about optimizing it.  So far it hasn't been
an issue for my customers, since most fs traffic lands on the XIP
pages.  Once I get a good automated performance test up, we'll
probably look into something to improve this.
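
One direction would be to replace the single cached block with a small
LRU of recently decompressed blocks, so interleaved readpage calls on
two files stop evicting each other on every access.  Rough userspace
sketch only; every name and size below is made up for illustration and
is not the actual AXFS code:

/*
 * Sketch: a small N-way LRU of decompressed blocks in place of the
 * single sbi->current_cnode_index slot.  Userspace model; all names
 * and sizes here are hypothetical.
 */
#include <stdint.h>
#include <string.h>

#define CACHE_WAYS 4          /* enough for a few interleaved readers */
#define BLOCK_SIZE 65536      /* assumed uncompressed cblock size */

struct block_ent {
	int      valid;
	uint64_t index;       /* which compressed block this slot holds */
	uint64_t last_used;   /* LRU stamp */
	uint8_t  data[BLOCK_SIZE];
};

static struct block_ent cache[CACHE_WAYS];
static uint64_t tick;

/* Stand-in for the real unzip path. */
static void decompress_block(uint64_t index, uint8_t *out)
{
	memset(out, (int)(index & 0xff), BLOCK_SIZE);
}

uint8_t *lookup_block(uint64_t index)
{
	struct block_ent *victim = &cache[0];
	int i;

	for (i = 0; i < CACHE_WAYS; i++) {
		if (cache[i].valid && cache[i].index == index) {
			cache[i].last_used = ++tick;
			return cache[i].data;   /* hit: no re-decompress */
		}
		if (cache[i].last_used < victim->last_used)
			victim = &cache[i];     /* remember the LRU slot */
	}
	decompress_block(index, victim->data);  /* miss: evict LRU */
	victim->valid = 1;
	victim->index = index;
	victim->last_used = ++tick;
	return victim->data;
}

With four ways, the A/B interleave in the trace above would hit the
cache every time after the first decompress of each block, instead of
re-decompressing X and Y on every readpage.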