Your thread count should be equal to or lower than the number of slabs.

The thread count seems extremely high; you should not need that many. You should
also set num-queries-per-thread. Try 16384.
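
For example, either drop num-threads to 32 to match your current slab counts,
or raise the slabs to 64. A sketch of the latter (slab counts must be powers
of two; these values are illustrative, not tuned for your load):

num-threads: 64
# one slab per thread reduces lock contention on the shared caches
msg-cache-slabs: 64
rrset-cache-slabs: 64
infra-cache-slabs: 64
key-cache-slabs: 64
ratelimit-slabs: 64
ip-ratelimit-slabs: 64
# allow more simultaneous queries per thread
num-queries-per-thread: 16384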

Can you also paste your memory settings and cache settings?


From: sir izake <[email protected]>
Date: Tuesday, 25 November 2025 at 8:08 pm
To: Seth Van Buren <[email protected]>
Cc: [email protected] <[email protected]>
Subject: Re: How to measure cache hit resolution time in unbound 1.24.1

Hi Seth

num-threads: 64
msg-cache-slabs: 32
rrset-cache-slabs: 32
infra-cache-slabs: 32
key-cache-slabs: 32
ratelimit-slabs: 32
ip-ratelimit-slabs: 32

The physical server is a Dell 640 with the specs below:

hw.ncpu: 104
hw.model: Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz

Thank you
Isaac


On Tue, Nov 25, 2025 at 5:13 AM Seth Van Buren
<[email protected]> wrote:
How many cores/slabs are you using?

From: Unbound-users <[email protected]> on behalf of sir izake via Unbound-users <[email protected]>
Date: Tuesday, 25 November 2025 at 2:51 pm
To: [email protected] <[email protected]>
Subject: How to measure cache hit resolution time in unbound 1.24.1

Hi

I have installed Unbound 1.24.1 on FreeBSD 14.3. My cache hit rate is 76%,
with over 20% of answers coming from recursive replies.

The median time for recursive replies is 440 ms, while the average is 520 ms.

This setup has been running for over 72 hours. I expected the stats to improve,
but that is not happening.

I just wanted to find out: is there a way to measure the cache hit resolution
time in a dashboard?

Can I do anything to improve the cache hit ratio?

Can I also improve the recursive reply time?

I am using unbound_exporter to monitor stats in Grafana.
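
For reference, the raw counters behind those panels can also be read directly
with unbound-control (key names as printed by unbound 1.24.1; values elided
here, and unbound_exporter may expose them under different metric names):

$ unbound-control stats_noreset | egrep 'num.cachehits|num.cachemiss|recursion.time'
total.num.cachehits=...
total.num.cachemiss=...
total.recursion.time.avg=...
total.recursion.time.median=...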

My configs have been adjusted as follows:
rrset-cache-size: 20G
msg-cache-size: 10G
cache-min-ttl: 1800
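
As a sanity check, the memory actually in use by those caches can be read
back with, e.g.:

$ unbound-control stats_noreset | grep mem.cache
mem.cache.rrset=...
mem.cache.message=...

(both reported in bytes)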

I am using the root hints file directly on the server for recursive lookups
and am not forwarding to any public resolver.
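
For reference, that is configured with the root-hints option; the path below
is illustrative for a FreeBSD install:

server:
    # path is an example; point this at wherever root.hints is installed
    root-hints: "/usr/local/etc/unbound/root.hints"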

Thank you

Regards,
Isaac

