Thanks for commenting, Qu.

 As you are working near this code, I would appreciate any
 code review comments.

+1 here, but in fact it's easy to deal with, as long as we don't
implement encryption as a compression method.

Just like in-band dedup, we can use the following method to support dedup
and compression together, while still using multiple CPU cores as
compression does:

The old, compression-only implementation:
an inode that needs compression goes into async_cow_start()
async_cow_start()
   |- compress_file_range()

The compression-with-dedup implementation:
an inode that needs compression *OR* dedup goes into async_cow_start()
async_cow_start()
   |
   |- if (!inode_need_dedup())
   |  |- compress_file_range()  <<Just as normal one
   |     |- btrfs_compress_pages()
   |     |- add_async_extent()
   |
   |- else
      |- hash_file_range()      <<Calculate file hashes
         |- normal dedup hash
         |- if (inode_need_compress())
         |  |- btrfs_compress_pages()
         |- add_async_extent()

Although it is not the most elegant method, it shows that we can make
compression and encryption co-operate.

However, the most elegant method is to rework the current cow_file_range()
and its variants into a unified btrfs internal API.

 Thanks for this. Right, currently there is no elegant way of doing it;
 I tried a bit of juggling, but nothing works short of a rework.

 But I am confused: are you suggesting we should cascade a compression
 engine and then an encryption engine? Austin advised against such an
 approach, and I have included it under the limitations section just to
 mention it. The idea, if someone does seek such a configuration, is to
 use an engine which provides both instead.

Thanks, Anand
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
