On May 30, 2017 1:11:51 PM GMT+02:00, Alexander Pyhalov <[email protected]> wrote:
>Hello.
>I have a question about the algorithms behind L2ARC. Specifically, if I
>have some data in L2ARC and a large sequential read comes in, does that
>mean the L2ARC will be rewritten? For example, if I have a 200GB L2ARC
>and someone reads 1TB of data, does that mean that afterwards the L2ARC
>is filled with this (perhaps rarely used) data? Is there any way to
>protect against this (perhaps by somehow marking some data to avoid its
>eviction from the cache)?

Both ARC and L2ARC strike a balance between caching the most-frequently-used
(MFU) and most-recently-used (MRU) blocks; I believe you can still tune the
relevant variables to shift that balance one way or the other, or stick with
the defaults.
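
For the big-sequential-read case specifically, note that the L2ARC feed is
throttled and, as I recall, skips prefetched (streaming) reads by default, so
a single 1TB scan should not simply flush a 200GB cache device. A rough sketch
of the relevant /etc/system tunables - the names are the illumos ones, and the
values shown are what I believe the defaults to be, so verify on your build:

    * Don't feed prefetched/streaming reads into L2ARC (1 = skip them):
    set zfs:l2arc_noprefetch = 1
    * Cap how many bytes the feed thread writes per interval, so one big
    * read can only displace a bounded slice of the cache device:
    set zfs:l2arc_write_max = 8388608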

Some people game the MFU side by reading metadata (find /mydir > /dev/null) or
data (tar cf - /mydir > /dev/null) from crontab, to keep those blocks hot and
cached. Others recommend not doing this and letting the system figure out
which data paths are really hot - humans tend to mis-estimate that.
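
If you do go the cron route anyway, it is just a couple of entries; the path
and schedule below are placeholders, not a recommendation:

    # Warm the metadata (directory walks touch dirs and dnodes) every hour:
    0 * * * * find /mydir > /dev/null 2>&1
    # Warm the file data too, less often since it is much larger:
    30 */4 * * * tar cf - /mydir > /dev/null 2>&1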

One case from the lists involved, IIRC, caching the metadata of home
directories or of areas to be backed up, so that these did not lag when real
users logged in or when backups began to stat filesystem objects to decide
which should go into incremental archives; without this, on their huge system,
the lag on initial access could be on the order of minutes - uncool for
interactive use ;)
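
If you try any of the above, the ARC kstats will tell you whether it pays off;
something like this on illumos (field names taken from arcstats, double-check
them on your release):

    # ARC metadata footprint and L2ARC hit/miss counters:
    kstat -p zfs:0:arcstats | egrep 'arc_meta_used|arc_meta_limit|l2_hits|l2_misses'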

Jim
--
Typos courtesy of K-9 Mail on my Redmi Android
