On Sat, Feb 09, 2019 at 12:01:39PM +0100, Otto Moerbeek wrote:
> Why is this a wall? Do your mmaps start failing? With what error code?

Well, 13G isn't the wall itself; I had tried loading the entire
/usr/share/dict/words as A records, which would have given more than 200K
RRSETs and blown up SIZE considerably, since the 30K RRSETs alone were 13G.

The mmaps did fail, but due to a mistake in my own code I could not see the
real errno value (I'll show you why):

----->
              /* does not exist, create it */

                map = (char *)mmap(NULL, SIZENODE, PROT_READ|PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON, -1, 0);
                if (map == MAP_FAILED) {
                        errno = EINVAL; /* clobbers the real mmap errno */
                        return -1;
                }

<-----

I should have logged it before reusing errno; I'll do some tests and get back
to you.  I am currently not at my workstation, as I went to visit my parents
after writing the original mail, and I won't be back until Monday or so, so
any changes to this code will have to wait until then.

> You should be able (given ulimits) to mmap up to MAXDSIZ (32G on
> amd64) per process.

I noticed that RLIMIT_DATA limits had given me problems too; raising them to
16G allowed me to work with the 30K RRSETs.

> If you want to reduce SIZE, call munmap(). That's the only way. You
> cannot have virtual memory pages that are not accounted for in SIZE.
> 
>       -Otto

Ahh, ok, thanks for clearing that up.  It looks like I'll have to rewrite the
way I store internal data if I want to use a LOT of RRSETs in the future.
That may be better for me anyway.

Thanks Otto!

Best Regards,
-peter
