On Mon, Jul 24, 2017 at 11:40:17PM +0800, Anand Jain wrote:
> > Eg. files that are already compressed would increase the cpu consumption
> > with compress-force, while they'd be hopefully detected as
> > incompressible with 'compress' and clever heuristics. So the NOCOMPRESS
> > bit would better reflect the status of the file.
>
> The current NOCOMPRESS is based on a trial-and-error method and is more
> accurate than a heuristic; also, the loss of cpu power is only one-time?
Currently, compress-force beats everything, so even a file with NOCOMPRESS
set will be compressed: all new writes are passed to the compression code
and, if they turn out incompressible, eventually stored uncompressed anyway.
The compression code runs and fails each time, so the cost is not one-time.
Although you can say it's more 'accurate', it's also more expensive.

> Maybe the only opportunity that a heuristic can facilitate is in the
> logic to monitor and reset NOCOMPRESS; as of now there is no such
> logic.

The heuristic can be made adaptive, and examine data even for NOCOMPRESS
files, but that's a few steps ahead of where we are now.