limit for cache-size?

2010-01-04 Thread Thomas Vogt

Hello

Are there any limits in BIND 9.6.* or 9.7.* for cache-size, or any known 
issues? I'm planning to use 8 GB of RAM for the named cache.


Regards,
Thomas Vogt
___
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: limit for cache-size?

2010-01-04 Thread Alan Clegg

Thomas Vogt wrote:

Are there any limits in BIND 9.6.* or 9.7.* for cache-size, or any known 
issues? I'm planning to use 8 GB of RAM for the named cache.


The LRU cache cleaning introduced in BIND 9.5.0 should make your large 
cache work as expected.
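
For reference, the cache ceiling is set with max-cache-size in the
options statement; a minimal sketch for your 8 GB case (check the ARM
for the exact size_spec syntax in your version):

options {
        max-cache-size 8G;   // LRU cleaning evicts older entries
                             // once the cache reaches this size
};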


AlanC
___
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: File Descriptor limit and malfunction bind

2010-01-04 Thread Kevin Darcy

What's your hard limit (ulimit -n -H)?

named seems to use, by default, the OS hard limit on file descriptors, 
even though the ARM says "The default is unlimited." When it starts 
up as superuser, in theory it should be able to set both the hard and 
soft limits to infinity, but it doesn't appear to be doing that, at 
least not on Solaris.
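
To see what limits are actually in effect, something like this (a
sketch; flag spelling varies slightly between shells):

$ ulimit -Sn    # soft limit on open files
$ ulimit -Hn    # hard limit on open files

On Solaris you can also inspect a running named directly:

$ plimit `pgrep named`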


If you want to raise the limit on open files, the recommended way is to 
use a "files" clause in the options statement. This might not work, 
however, if you try to raise the limit beyond the OS-defined hard limit 
on a running named process that has already dropped its superuser 
privileges. You'd need to restart named in that case.
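
For example, in named.conf (a sketch; 65536 is an arbitrary
illustrative value, not a recommendation):

options {
        files 65536;   // raise named's open-file limit; cannot exceed
                       // the OS hard limit once privileges are dropped
};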



  - Kevin

Ram Akuka wrote:

Hi,
I have a high-load DNS server running BIND 9.4.3 on Red Hat.
Yesterday we experienced a problem with BIND (it froze), and when
looking at the logs I saw the following error:

named error: socket: file descriptor exceeds limit (4096/4096)

I looked at my OS file descriptor limit with ulimit -n, and it
reports 1024. Where does the number 4096 come from?

BTW, the named I'm running uses 7 CPUs (-n 7 at startup).

Please advise,

--
Ram


___
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: BIND 9.6.1-P1 crashing

2010-01-04 Thread JINMEI Tatuya / 神明達哉
At Wed, 30 Dec 2009 10:23:17 +0100,
Dario Miculinic dario.miculi...@t-com.hr wrote:

 I'm administering 4 DNS servers running CentOS release 5.4 and Red Hat 
 Enterprise Linux Server release 5.2 with BIND 
 version 9.6.1-P1. On 3 of them, BIND crashed 7 times in the last 10 days. There's 
 nothing in the log files, but we have a core dump 
 file. I found this in the core dump:
 
 #0  0x080db986 in ttl_sooner (v1=0x0, v2=0x3385b628) at rbtdb.c:752
 752 ttl_sooner(void *v1, void *v2) {
 (gdb) where
 #0  0x080db986 in ttl_sooner (v1=0x0, v2=0x3385b628) at rbtdb.c:752

What's the result of the following gdb command?

(gdb) thread apply all bt full
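
(If you're starting from the core file, the invocation is along these
lines; the binary and core paths here are placeholders, not your
actual paths:)

$ gdb /usr/sbin/named /path/to/core
(gdb) thread apply all bt full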

We've seen a crash like this one before, but we've not figured out how
it happens.  It is quite likely an inter-thread race, and it may be
tricky to track down.  Given the v1/v2 values in your stack trace, a
full backtrace with information from the other threads may provide a
more useful hint.

If you need an immediate workaround rather than chasing the bug,
rebuilding named with --disable-atomic may help (we cannot be sure,
because we don't yet know how this bug happens in the first place).
This makes named use locks in a more conservative way and may avoid
the tricky race condition, at the cost of lower performance (so if
you try it, you'll also want to watch the server load).
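
Roughly, the rebuild would look like this (a sketch; substitute the
configure options you normally build with):

$ ./configure --disable-atomic [your usual configure options]
$ make
(then install and restart named)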

---
JINMEI, Tatuya
Internet Systems Consortium, Inc.
___
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: File Descriptor limit and malfunction bind

2010-01-04 Thread Shumon Huque
On Mon, Jan 04, 2010 at 01:43:52PM -0500, Kevin Darcy wrote:
 
 named seems to use, by default, the OS hard limit on file descriptors, 
 even though the ARM says "The default is unlimited." When it starts 
 up as superuser, in theory it should be able to set both the hard and 
 soft limits to infinity, but it doesn't appear to be doing that, at 
 least not on Solaris.

This is not my experience on Solaris 10. According to the code, if the
limit is left undefined in the config file, named raises both the soft
and hard limits to RLIM_INFINITY (lib/isc/unix/resource.c), and that's
what I observe on my servers:

$ plimit `pgrep named`
23385:  /usr/local/sbin/named
   resource               current        maximum
  time(seconds)           unlimited      unlimited
  file(blocks)            unlimited      unlimited
  data(kbytes)            unlimited      unlimited
  stack(kbytes)           unlimited      unlimited
  coredump(blocks)        unlimited      unlimited
  nofiles(descriptors)    unlimited      unlimited
  vmemory(kbytes)         unlimited      unlimited

The invoking environment had nofiles settings of 256 (soft) and
65536 (hard) respectively, which appear to be the OS defaults.
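
The relevant startup logic amounts to a setrlimit() call along these
lines (a simplified sketch of what lib/isc/unix/resource.c does when
no "files" clause is configured, not the actual ISC code):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
        /* Raise both the soft and hard open-file limits to
         * "unlimited". Raising the hard limit above its current
         * value requires superuser privilege, which is why named
         * must do this before dropping root. */
        struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                perror("setrlimit");
        return 0;
}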

--Shumon.
___
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users