I have a large ZFS backup server hosting millions of files that are transferred to it via rsync. Most of the runtime is spent in rsync comparing files by timestamp, so the metadata gets re-read again and again on every backup run (the incremental transfers themselves are small). That puts unnecessary stress on the disks and slows the backups down considerably, so please consider adding an optimization parameter like this one, which would help keep the metadata entirely in L2ARC, since it is too big to fit in RAM.
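
In the meantime, a rough workaround sketch using the existing per-dataset cache properties (secondarycache / primarycache are real ZFS properties; the dataset name "tank/backups" is just a placeholder, and this only restricts what the L2ARC will accept for the dataset, it does not pin the metadata there the way a dedicated parameter could):

```python
# Sketch: bias the existing ZFS cache properties so L2ARC holds only
# metadata for the backup dataset. Requires the zfs(8) CLI in PATH.
import subprocess

def zfs_set(prop: str, value: str, dataset: str) -> None:
    """Set a ZFS dataset property via the zfs CLI."""
    subprocess.run(["zfs", "set", f"{prop}={value}", dataset], check=True)

def zfs_get(prop: str, dataset: str) -> str:
    """Read back a property value (value column only, no headers)."""
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", prop, dataset],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    dataset = "tank/backups"  # placeholder dataset name
    # Cache only metadata (dnodes, indirect blocks) in L2ARC for this dataset,
    # so rsync's stat()-heavy comparison pass has a better chance of being
    # served from the cache device instead of the spinning disks.
    zfs_set("secondarycache", "metadata", dataset)
    print("secondarycache =", zfs_get("secondarycache", dataset))
```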

Anyhow, I'm curious why the metadata takes that much space in RAM at all. How much metadata is kept in RAM for each file? It seems to be much more than I would expect: I would guess well under 1 KB per file, but from what I observe it's more...
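
To make the question concrete, here is a back-of-envelope estimate of the components I would expect per cached file. The on-disk dnode size (512 bytes at the legacy dnodesize) is the only firm number; every other size below is my guess and would need to be checked against the actual in-core structures for a given OpenZFS version:

```python
# Back-of-envelope estimate of the in-RAM metadata footprint per cached file.
# All sizes except ON_DISK_DNODE are rough guesses for illustration only.

ON_DISK_DNODE = 512      # bytes per dnode on disk (legacy dnodesize)
IN_CORE_DNODE = 900      # guess: dnode_t plus bonus/SA data held in core
DBUF_OVERHEAD = 300      # guess: dmu_buf_impl_t per cached buffer
ARC_HDR_OVERHEAD = 250   # guess: ARC header per cached block
VFS_OVERHEAD = 1000      # guess: znode + inode + dentry while the file is hot

def per_file_estimate(include_vfs: bool = False) -> int:
    """Very rough bytes of RAM per file whose metadata is cached."""
    total = ON_DISK_DNODE + IN_CORE_DNODE + DBUF_OVERHEAD + ARC_HDR_OVERHEAD
    if include_vfs:
        total += VFS_OVERHEAD
    return total

if __name__ == "__main__":
    files = 10_000_000
    arc_only = per_file_estimate()
    with_vfs = per_file_estimate(include_vfs=True)
    print(f"~{arc_only} B/file (ARC only)  -> "
          f"~{files * arc_only / 2**30:.1f} GiB for {files:,} files")
    print(f"~{with_vfs} B/file (incl. VFS) -> "
          f"~{files * with_vfs / 2**30:.1f} GiB for {files:,} files")
```

Even with these guesses the total comes out closer to 2-3 KB per file than to the <1 KB I had assumed, which would at least be consistent with what I observe.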
