On 9/28/2016 13:42, trafdev wrote:
> Thanks Jov and Karl!
>
> What do you think about:
>
> primarycache=all
>
> for SELECT queries over same data sets?
>
>> Yes.
>>
>> Non-default stuff...
>>
>> dbms/ticker-9.5  compressratio         1.88x                  -
>> dbms/ticker-9.5  mounted               yes                    -
>> dbms/ticker-9.5  quota                 none                   default
>> dbms/ticker-9.5  reservation           none                   default
>> dbms/ticker-9.5  recordsize            128K                   default
>> dbms/ticker-9.5  mountpoint            /dbms/ticker-9.5       local
>> dbms/ticker-9.5  sharenfs              off                    default
>> dbms/ticker-9.5  checksum              on                     default
>> dbms/ticker-9.5  compression           lz4                    inherited from dbms
>> dbms/ticker-9.5  atime                 off                    inherited from dbms
>> dbms/ticker-9.5  logbias               throughput             inherited from dbms
>>
>>
>> -- 
>> Karl Denninger
>> k...@denninger.net <mailto:k...@denninger.net>
>> /The Market Ticker/
>> /[S/MIME encrypted email preferred]/
>
Primarycache=all is the default; changing it ought to be contemplated
only under VERY specific circumstances.  In the case of a database, if
you set primarycache to something other than "all" (e.g. "metadata" or
"none") then an 8kb data read against a 128kb recordsize will result in
reading 128kb (the full block), returning the requested 8kb out of it
and then *throwing away* the rest of the data read, since you
prohibited it from going into the ARC.  That's almost certainly going
to do bad things for throughput!
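
For example, on the dataset from your listing (a sketch; the 16x figure
just follows from the 128kb recordsize divided by the 8kb read):

    # Confirm the current setting ("all" is the default):
    zfs get primarycache,recordsize dbms/ticker-9.5

    # With primarycache=none, every 8kb Postgres page read pulls a
    # full 128kb record off disk and caches none of it -- 16x read
    # amplification.  Shown only for illustration; don't do this here:
    zfs set primarycache=none dbms/ticker-9.5

    # Put it back:
    zfs set primarycache=all dbms/ticker-9.5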

Note that having an L2ARC, which is the place where you might find
setting primarycache to have a benefit, is itself something you need to
instrument under your specific workload to see if it's worth it.  If
you want to know whether it *might* be worth it you can use (on
FreeBSD) zfs-stats -E; if you're seeing materially more than 15% cache
misses then it *might* help, assuming what you put it on is *very*
fast (e.g. SSD).
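
Roughly like this (the pool name comes from your dataset paths; the
device name is only an example -- substitute whatever fast SSD you
actually have):

    # zfs-stats lives in the sysutils/zfs-stats port/package:
    zfs-stats -E                     # check the ARC hit/miss ratios

    # Only if misses are materially above ~15% and the device is fast:
    zpool add dbms cache /dev/ada2   # example device name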

In addition, if you're on FreeBSD (and you say you are) be aware that
the VM system and ZFS interact in some "interesting" ways under certain
load profiles, and UMA is involved to a material degree in the issue.
I have done quite a bit of work on the internal ZFS code in this
regard; 11.x is better-behaved than 10.x to a quite-material degree.  I
have a patch set out against both 10.x and 11.x that addresses some
(but not all) of the issues.
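
If you want to watch for that interaction, these are the sort of
counters to look at (stock FreeBSD tools; the zone names to grep for
are just the usual suspects, not anything from my patch set):

    # ARC size versus the configured cap:
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

    # UMA zone usage; lots of memory sitting "FREE" in the zio/arc
    # zones suggests it's parked in UMA caches rather than having
    # been returned to the VM system:
    vmstat -z | egrep 'zio|arc'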

-- 
Karl Denninger
k...@denninger.net <mailto:k...@denninger.net>
/The Market Ticker/
/[S/MIME encrypted email preferred]/
