There is a difference

ARC/L2ARC only caches data blocks based on a read-last/read-most optimization, not whole files, and it helps only on reads. On a hybrid tiering pool with a fast special vdev you gain on reads and writes, and for whole user-selected files or large VMs.

I agree that rule- or file-based data tiering with a data move between tier1 and tier2 storage is more complex than simple caching or the default ZFS tiering based on data structures, but let's look at the worst case.

A data move between tier1 and tier2 is a chain of actions like:
increase recsize && rename file to file.tier && copy file.tier to file && delete file.tier

This means every action must succeed for the next in the chain to happen.
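Under that assumption, the demotion chain can be sketched as a small shell function. This is only an illustration of the chain above, not an existing tool: the name "demote" and the cmp verification step are my additions, and it assumes the dataset's recordsize has already been raised so the rewritten copy lands on the regular (tier2) vdevs.

```shell
# Hypothetical sketch of the tier1 -> tier2 move chain described above.
# Assumes recordsize was already increased on the dataset, so the
# rewritten blocks land on the regular (tier2) vdevs.
demote() {
    f="$1"                        # file to move to tier2
    mv "$f" "$f.tier"        &&   # rename the original out of the way
    cp -p "$f.tier" "$f"     &&   # rewrite: new blocks land on tier2
    cmp -s "$f" "$f.tier"    &&   # verify the copy before any delete
    rm "$f.tier"                  # only runs if every prior step succeeded
}
```

If any step fails, the chain stops and file.tier survives, which is exactly the recoverable state described below.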

If something goes wrong during the copy, the final delete of file.tier does not happen, so you are left with a renamed file.tier instead of file. In the worst case, use snapshots. If the special vdev is full, new files land on the regular hd vdev/tier2 instead of the fast ssd/tier1. It is not nice that I see no method to detect such files; the only solution is to force a manual move of such files to tier1 once there is free space again on tier1 storage. Another problem could be open files without proper locking. If this is extremely critical, you may need to do the tiering in a low-traffic window with shares disabled.
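Because the delete of file.tier only runs after a verified copy, any surviving .tier file marks an interrupted move, and a cleanup pass can undo it. A minimal sketch, assuming the .tier copy is always the authoritative original (the function name is illustrative):

```shell
# Hypothetical recovery pass after an interrupted move. A surviving
# *.tier file is the authoritative original, since it is only deleted
# after a verified copy; any coexisting copy is incomplete.
tier_recover() {
    dir="$1"                      # directory to scan, e.g. /pool/data
    find "$dir" -name '*.tier' -type f | while read -r t; do
        orig="${t%.tier}"
        rm -f "$orig"             # discard the incomplete copy, if any
        mv "$t" "$orig"           # restore the original under its old name
    done
}
```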

This is not a use case for every user, but for special use cases where you want to move files transparently between fast/expensive tier1 and slow/cheaper tier2 storage with paths remaining intact. During the move, files are not accessible (locked).

Gea

On Sun, 12 Nov 2023 at 03:33, Gea <[email protected]> wrote:
Have i missed something?
This seems complex and potentially error prone.  How is this superior
to using an L2ARC cache device to increase access speed to frequently
accessed data?


Cheers.


------------------------------------------
illumos: illumos-discuss
Permalink: 
https://illumos.topicbox.com/groups/discuss/Ta9815f4d6c901308-M095380cc095dcb5a1446cd19
Delivery options: https://illumos.topicbox.com/groups/discuss/subscription
