On Wed, Jun 04, 2014 at 10:00:06AM -0400, Chris Mason wrote:
> I have a slightly different reason for holding off on these.  Disk
> format changes are forever, and we need a really strong use case for
> pulling them in.

A format upgrade is inevitable for full bidirectional interoperability
between filesystems whose sector size differs from the page size when
compression is enabled. At the moment such interoperability is not
possible even without compression, but patches are on the way.

> With that said, thanks for spending all of the time on this.  Pulling in
> Dave's idea to stream larger compression blocks through lzo (or any new
> alg) might be enough to push performance much higher, and better show
> case the differences between new algorithms.

The space savings and speed gains can be measured outside of btrfs.
From past numbers I see that going from 4k to 64k chunks brings another
5-10% in compression ratio, and de/compression speed is no worse.

Chunks bigger than that do not improve the ratio much, but they would
reduce the overhead of assembling the linear mappings.
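To illustrate the kind of out-of-filesystem measurement meant above, here is a
minimal sketch that compresses each chunk independently (as the filesystem
does) at 4 KiB and 64 KiB chunk sizes and compares the resulting ratios. It
uses zlib as a stand-in codec and synthetic sample data, purely for
illustration; the actual numbers came from real corpora and the candidate
algorithms.

```python
# Sketch: measure per-chunk compression ratio at two chunk sizes,
# outside the filesystem. zlib stands in for the codec under test.
import zlib

def chunked_ratio(data: bytes, chunk_size: int) -> float:
    """Compress each chunk independently, as an FS with independently
    decompressible extents would, and return compressed/original size."""
    compressed = 0
    for off in range(0, len(data), chunk_size):
        compressed += len(zlib.compress(data[off:off + chunk_size]))
    return compressed / len(data)

if __name__ == "__main__":
    # Synthetic compressible data; real measurements would use real files.
    data = b"the quick brown fox jumps over the lazy dog\n" * 4096
    r4k = chunked_ratio(data, 4 * 1024)
    r64k = chunked_ratio(data, 64 * 1024)
    print(f"4 KiB chunks:  ratio {r4k:.3f}")
    print(f"64 KiB chunks: ratio {r64k:.3f}")
```

Larger chunks give the compressor more context per compression call, so the
64 KiB ratio is at least as good as the 4 KiB one on most inputs.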

> The whole reason I chose zlib originally was because its streaming
> interface was a better fit for how FS IO worked.

Right, zlib has a streaming interface and accepts randomly scattered
blocks, but the others do not. LZ4 has a proposed streaming extension,
but I haven't looked closely at whether it satisfies our constraints.
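For clarity, this is what "streaming and accepts scattered blocks" means in
practice: input can be fed to the compressor in arbitrary, non-contiguous
pieces (e.g. individual pages) while producing a single compressed stream. A
minimal sketch using Python's zlib bindings, with page-sized buffers standing
in for scattered FS pages:

```python
# Sketch: zlib's streaming interface takes input in arbitrary pieces
# (like scattered pages) and emits one compressed stream.
import zlib

def stream_compress(pieces):
    """Feed non-contiguous input pieces into a single zlib stream."""
    c = zlib.compressobj()
    out = b"".join(c.compress(p) for p in pieces)
    return out + c.flush()

# Stand-ins for three scattered 4 KiB pages.
pages = [b"A" * 4096, b"B" * 4096, b"C" * 4096]
blob = stream_compress(pages)
assert zlib.decompress(blob) == b"".join(pages)
```

A block-oriented codec without such an interface would instead require the
caller to assemble the pages into one contiguous buffer first, which is the
linear-mapping overhead mentioned above.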