Hi Jeremy,

Thanks for your kind response.
On Thu, 24 May 2012, Ben wrote:

version: 9.7.3-P3-RedHat-9.7.3-8.P3.el6_2.2
CPUs found: 8
worker threads: 8
number of zones: 19
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is ON
recursive clients: 6400/29900/30000
tcp clients: 0/100
server is up and running


I constantly watch the rndc status output, and in the recursive clients line the
first value only rises to a maximum of about 6000-6500. Why does it never reach
the 30000 maximum that I defined?
I don't know why it never reached the maximum. resperf should try to scale up
to attempting 100,000 queries in its last ramp-up second (at the 60th second, I
think; the final 40 seconds are spent waiting for responses). It only attempts
74038 queries in total, but I am not sure what is limiting it.
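
If it helps, the ramp-up target and duration are set on the resperf command line. Assuming a dnsperf-style resperf that supports the -m and -r options, something like this raises the ceiling (the server address and datafile name are placeholders):

  # ramp up to 200,000 qps over 60 seconds instead of the 100,000 default
  resperf -s 192.0.2.1 -d queryfile-example -m 200000 -r 60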

Maybe your datafile is not unique enough? Maybe your source port range
is not large enough? If so, BIND 9 may be matching new queries to existing
requests and dropping them.
My source port range is
cat /proc/sys/net/ipv4/ip_local_port_range
1024    65535

I downloaded the data file from the resperf provider's site.
It depends a lot on the dataset. (I think I have seen around 17,000
queries with resperf and as low as 236 qps -- in that case it depended
on the number of ACLs.)
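
A quick sanity check on the datafile (just a suggestion, not necessarily the problem here) is to compare the total number of query lines with the number of distinct ones:

  # total lines vs. distinct "name type" lines in the resperf datafile
  wc -l < queryfile-example
  sort -u queryfile-example | wc -l

If the second number is much smaller, many of the queries are duplicates and can be answered from cache or matched to recursions already in progress.
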
I am not using any extra ACLs for this testing.
I don't know why you have the burst of "operation canceled" errors.
(ISC_R_CANCELED can result from several different problems.)
Please suggest what could cause the "operation canceled" errors that appear in the named.run log file.
rndc status shows 8 worker threads, but when I check with pgrep named it
shows only a single instance. Should it show 8 instances, or not?
Eight worker threads are different from eight processes: named runs as a single process with multiple threads inside it.
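
For example, on Linux you can confirm that the single named process contains multiple threads (the nlwp column comes from procps ps; option support may vary by distribution):

  # NLWP is the number of threads inside the one named process
  ps -o pid,nlwp,cmd -p $(pgrep -d, named)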

Currently we use BIND as a caching name server, so why does rndc status show
the number of zones as 19?
The 19 zones are built-in zones. (See the ARM for the list.)

By the way, to establish a maximum baseline for comparison you can try having
resperf query the built-in zones. (It won't be real recursive work, but it
should show you a potential maximum qps.)
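
As a rough sketch of that idea (the names below are only examples; the actual built-in zone list for your version is in the ARM), a small datafile of names falling inside the automatic empty zones lets named answer from its own authority without recursing:

  # baseline-queries: answered locally by the built-in empty zones
  1.0.0.127.in-addr.arpa PTR
  255.255.255.255.in-addr.arpa PTR

  resperf -s 127.0.0.1 -d baseline-queries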

Is there anything we need to keep in mind in the OS kernel tuning parameters, or on the BIND configuration side, to achieve more QPS?
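
(Not from this thread, just to illustrate the kind of knobs I mean; the values are examples only:)

  # larger socket receive buffers to absorb bursts of UDP queries
  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.rmem_default=8388608
  # more file descriptors available to named
  ulimit -n 65536

On the BIND configuration side, options such as recursive-clients and tcp-clients set the quotas shown in the rndc status output above.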

By the way, what is the highest QPS benchmark achieved with BIND on production servers?

If anyone has achieved high QPS with BIND on production servers, I would appreciate your input.


   Jeremy C. Reed
   ISC
Regards,
Ben
_______________________________________________
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users
