On 28/07/2017 00:36, David Sterba wrote:
> On Mon, Jul 24, 2017 at 11:40:17PM +0800, Anand Jain wrote:
> Eg. files that are already compressed would increase the cpu consumption
> with compress-force, while they'd be hopefully detected as
> incompressible with 'compress' and clever heuristics. So the NOCOMPRESS
> bit would better reflect the status of the file.

I thought 'compress' in the above was the compress option. Ah, you mean
the compression algorithm.. got it. Right, compress-force on
incompressible data is very expensive.

It is also true that the compress option is not at all expensive for
incompressible data, since the cost is paid only once.

The current NOCOMPRESS is based on a trial-and-error method and is more
accurate than a heuristic, and the loss of CPU power is only one time?
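
To make the one-time-cost point concrete, here is a minimal standalone
sketch of that trial-and-error behaviour under plain 'compress'. It is
not kernel code; model_inode, try_compress and the nocompress field are
illustrative names only. A failed attempt pays the CPU cost once, sets
the per-inode flag, and later writes skip the compressor:

/*
 * Standalone model (not kernel code) of the trial-and-error behaviour
 * under the plain 'compress' mount option. All names are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_inode {
        bool nocompress;        /* models the per-inode NOCOMPRESS bit */
};

/* Pretend compressor: returns true if the data actually got smaller. */
static bool try_compress(const void *data, size_t len)
{
        (void)data;
        return len % 2 == 0;    /* stand-in for a real compressibility test */
}

/* One write under 'compress' (no force): the CPU cost of a failed
 * attempt is paid once, after which the flag short-circuits writes. */
static void write_range(struct model_inode *inode, const void *data, size_t len)
{
        if (inode->nocompress) {
                printf("skip compression (NOCOMPRESS set)\n");
                return;
        }
        if (try_compress(data, len))
                printf("stored compressed\n");
        else {
                printf("compression failed, setting NOCOMPRESS\n");
                inode->nocompress = true;
        }
}

int main(void)
{
        struct model_inode inode = { .nocompress = false };
        char buf[5];

        write_range(&inode, buf, sizeof(buf));  /* fails, sets the flag */
        write_range(&inode, buf, sizeof(buf));  /* skipped: one-time cost */
        return 0;
}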

> Currently, force-compress beats everything, so even a file with
> NOCOMPRESS will be compressed; all new writes will be passed to the
> compression and eventually stored uncompressed.

It makes sense to me if you replace NOCOMPRESS with incompressible data
in the above statement, since in my understanding you will never have a
file with the NOCOMPRESS flag if the compress-force option is used.
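
For reference, here is a minimal standalone model of the decision above,
roughly what the inode_need_compress() check in fs/btrfs/inode.c decides;
the struct and field names below are mine, not the kernel identifiers. It
shows why, with compress-force, the NOCOMPRESS bit is never even
consulted:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative names only; these model the mount options and per-inode
 * bit from the discussion, not the exact kernel identifiers. */
struct mount_opts {
        bool compress;
        bool compress_force;
};

struct model_inode {
        bool nocompress;
};

/*
 * Should a new write attempt compression?  With compress-force the
 * answer is always yes, so the NOCOMPRESS bit never gets a chance to
 * matter.
 */
static bool should_try_compress(const struct mount_opts *opts,
                                const struct model_inode *inode)
{
        if (opts->compress_force)
                return true;            /* force beats everything */
        if (inode->nocompress)
                return false;           /* a previous attempt already failed */
        return opts->compress;
}

int main(void)
{
        struct mount_opts force = { .compress = true, .compress_force = true };
        struct mount_opts plain = { .compress = true, .compress_force = false };
        struct model_inode flagged = { .nocompress = true };

        printf("force + NOCOMPRESS : %d\n", should_try_compress(&force, &flagged));
        printf("plain + NOCOMPRESS : %d\n", should_try_compress(&plain, &flagged));
        return 0;
}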

> Each time, the compression code will run and fail, so it's not one time.
> Although you can say it's more 'accurate', it's also more expensive.

Yes, expensive only with compress-force.

Maybe the only opportunity where the heuristic can help is in the logic
to monitor and reset NOCOMPRESS; as of now there is no such logic.
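
Purely as an illustration of what such monitor-and-reset logic could look
like (nothing like this exists today, and every name and threshold below
is made up), one option would be to clear the flag after some number of
new writes so the heuristic gets another look at the data:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical only: no such reset logic exists today, as noted above.
 * Names and the threshold are made up for illustration. */
struct model_inode {
        bool nocompress;
        unsigned long writes_since_flagged;
};

#define RECHECK_INTERVAL 128    /* arbitrary assumption, not a tuned value */

/* Called on each new write to a flagged inode: after enough writes,
 * clear NOCOMPRESS so the heuristic can re-examine the new data. */
static void maybe_reset_nocompress(struct model_inode *inode)
{
        if (!inode->nocompress)
                return;
        if (++inode->writes_since_flagged >= RECHECK_INTERVAL) {
                inode->nocompress = false;
                inode->writes_since_flagged = 0;
        }
}

int main(void)
{
        struct model_inode inode = { .nocompress = true };

        for (int i = 0; i < RECHECK_INTERVAL; i++)
                maybe_reset_nocompress(&inode);
        printf("nocompress after %d writes: %d\n",
               RECHECK_INTERVAL, inode.nocompress);
        return 0;
}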

> The heuristic can be made adaptive, and examine data even for NOCOMPRESS
> files, but that's a few steps ahead of where we are now.

Nice.
Thanks, Anand