On Tue, Mar 22, 2016 at 09:35:50AM +0800, Qu Wenruo wrote:
> From: Wang Xiaoguang <wangxg.f...@cn.fujitsu.com>
> 
> The basic idea is to also calculate the hash before compression, and to
> add the members dedupe needs to record a compressed file extent.
> 
> Since dedupe supports a dedupe_bs larger than 128K, which is the upper
> limit of a compressed file extent, in that case we skip dedupe and
> prefer compression, as at that size the dedupe hit rate is low and the
> benefit of compression is more obvious.
> 
> The current implementation is far from elegant. The most elegant one
> would split every data processing method into its own independent
> function, with a unified function to coordinate them.
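
For reference, the ordering described above boils down to a small
decision: hash the uncompressed bytes first, and fall back from dedupe
to compression once dedupe_bs exceeds the 128K compressed-extent limit.
A minimal userspace sketch, with hypothetical names (choose_data_path
and want_compress are made up; BTRFS_MAX_COMPRESSED here just stands in
for the 128K limit rather than quoting the patch):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BTRFS_MAX_COMPRESSED	(128 * 1024)	/* upper limit of a compressed extent */

enum data_path { PATH_DEDUPE, PATH_COMPRESS, PATH_PLAIN };

/*
 * Pick a processing path for one range of dirty data.  The dedupe
 * hash is taken over the uncompressed bytes, so it has to be
 * computed before any compression runs.
 */
static enum data_path choose_data_path(uint64_t dedupe_bs, bool want_compress)
{
	/*
	 * A dedupe block bigger than 128K cannot be represented as a
	 * single compressed extent, and at that size the dedupe hit
	 * rate is low anyway, so prefer compression.
	 */
	if (dedupe_bs > BTRFS_MAX_COMPRESSED)
		return want_compress ? PATH_COMPRESS : PATH_PLAIN;

	return PATH_DEDUPE;	/* hash first, then optionally compress */
}

int main(void)
{
	printf("%d\n", choose_data_path(512 * 1024, true));	/* PATH_COMPRESS */
	printf("%d\n", choose_data_path(128 * 1024, true));	/* PATH_DEDUPE */
	return 0;
}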

I'd leave this one out for now; it looks like we need to refine the
pipeline from dedup -> compression, and this is just more to carry
around until the initial support is in.  Can you just decline to dedup
compressed extents for now?
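
A minimal sketch of that simpler rule, i.e. treating compression and
dedupe as mutually exclusive for now (extent_info and extent_may_dedupe
are made-up names, not symbols from the patch):

#include <stdbool.h>

struct extent_info {
	bool compressed;	/* extent is stored compressed on disk */
};

/*
 * Decline to dedupe anything stored compressed: only plain extents
 * get hashed and looked up in the dedupe tree.
 */
static bool extent_may_dedupe(const struct extent_info *ei)
{
	return !ei->compressed;
}

This keeps the initial pipeline one-directional: a range either goes
through compression or through dedupe, never both.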

-chris