I suspect that "ulimit -d" does not work correctly, so I wrote a test program, which simply allocates 4K chunks and writes an int into each. It keeps a count, and prints the amount it was able to allocate in kB.
(Yes, I know there's malloc overhead and other things; the point is just to see whether limits are working, and to get a feel for the malloc overhead and other data, as a sanity check before relying on limits with a program that's too complicated to analyze 100% reliably.)

The tests below are on a reasonably recent netbsd-10 amd64 system with 32 GB of RAM. I limited the test program to 4 GB (to avoid provoking unrelated zfs locking bugs I've written about before).

Running it with my default limits:

  number of threads               (-T) 8192
  socket buffer size       (bytes, -b) unlimited
  core file size          (blocks, -c) unlimited
  data seg size           (kbytes, -d) 8388608
  file size               (blocks, -f) unlimited
  max locked memory       (kbytes, -l) 10740156
  max memory size         (kbytes, -m) 32220468
  open files                      (-n) 20000
  pipe size            (512 bytes, -p) 1
  stack size              (kbytes, -s) 4096
  cpu time               (seconds, -t) unlimited
  max user processes              (-u) 1044
  virtual memory          (kbytes, -v) unlimited

got me 4194304.

I didn't want to mess up my shell's limits, so I started a new /bin/sh and reduced the data limit to 1024, and then the memory limit to 1024 as well. The unit is kB, so this is a mere 1 MB of memory, which is plenty to run the test program:

   text    data     bss     dec     hex filename
   2600     578      72    3250     cb2 test-data

But I can still allocate 4 GB:

$ ulimit -a
time          (-t seconds    ) unlimited
file          (-f blocks     ) unlimited
data          (-d kbytes     ) 8388608
stack         (-s kbytes     ) 4096
coredump      (-c blocks     ) unlimited
memory        (-m kbytes     ) 32220468
locked memory (-l kbytes     ) 10740156
thread        (-r threads    ) 8192
process       (-p processes  ) 1044
nofiles       (-n descriptors) 20000
vmemory       (-v kbytes     ) unlimited
sbsize        (-b bytes      ) unlimited
$ ulimit -d 1024
$ ulimit -a
time          (-t seconds    ) unlimited
file          (-f blocks     ) unlimited
data          (-d kbytes     ) 1024
stack         (-s kbytes     ) 4096
coredump      (-c blocks     ) unlimited
memory        (-m kbytes     ) 32220468
locked memory (-l kbytes     ) 10740156
thread        (-r threads    ) 8192
process       (-p processes  ) 1044
nofiles       (-n descriptors) 20000
vmemory       (-v kbytes     ) unlimited
sbsize        (-b bytes      ) unlimited
$ ./test-data
4194304
$ ulimit -m 1024
$ ulimit -a
time          (-t seconds    ) unlimited
file          (-f blocks     ) unlimited
data          (-d kbytes     ) 1024
stack         (-s kbytes     ) 4096
coredump      (-c blocks     ) unlimited
memory        (-m kbytes     ) 1024
locked memory (-l kbytes     ) 10740156
thread        (-r threads    ) 8192
process       (-p processes  ) 1044
nofiles       (-n descriptors) 20000
vmemory       (-v kbytes     ) unlimited
sbsize        (-b bytes      ) unlimited
$ ./test-data
4194304

Questioning my program, I ran top in another window and saw "4067M" in the RSS column. Yet in that shell, in that state, "unison" fails:

$ unison
sh: unison: not enough memory

So it seems NetBSD won't exec a process that would exceed the limit at exec time, but malloc is able to obtain virtual memory (which is backed by actual memory) without being constrained by those limits.
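A possible cross-check, sketched below but not something the numbers above depend on (the file name getlim.c is just illustrative): have the process report the limits it actually inherited, via getrlimit(2), on the assumption that sh's -d and -m correspond to RLIMIT_DATA and RLIMIT_RSS.

/*
 * getlim.c: sketch only -- print the data and RSS limits this process
 * inherited, assuming -d and -m map to RLIMIT_DATA and RLIMIT_RSS.
 */
#include <sys/resource.h>
#include <stdio.h>

static void
show(const char *name, int resource)
{
	struct rlimit rl;

	if (getrlimit(resource, &rl) == -1) {
		perror(name);
		return;
	}
	/* rlim_t is printed via a cast to keep the format portable. */
	printf("%s: soft %lld hard %lld\n", name,
	    (long long)rl.rlim_cur, (long long)rl.rlim_max);
}

int
main(void)
{
	show("RLIMIT_DATA", RLIMIT_DATA);
	show("RLIMIT_RSS", RLIMIT_RSS);
	return 0;
}

If this shows a 1024 kB soft data limit in the same shell where ./test-data still reports 4194304, the shell builtin is doing its part, and the question becomes how malloc'd memory is accounted against that limit.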
Background (skippable): I am trying to test unison (a file sync tool), to find out how much memory it needs for various workloads. I wrote a script to set limits via ulimit(1), both data and stack, sticking to POSIX, and then invoke unison. I then did a manual binary search to find the smallest limits that worked. For stack, it was 88, 100, and 212 kB for 10000, 20000, and 40000 files; that wasn't really surprising. For data, it was 1376 kB, which didn't seem remarkable in itself, but it was the same for varying workloads.

----------------------------------------

/*
 * test-data.c: Test data size limit.
 * Allocate memory until failure.
 * Print amount allocated.
 */

#include <stdio.h>
#include <stdlib.h>

#define BUFSIZK 4

int
main(int argc, char **argv)
{
	int i;
	int *buf;

	/*
	 * Limit to 4 GB, to avoid problems if data segment limits are not
	 * working as expected.
	 */
	for (i = 0; i < (4 * 1024 * 1024) / BUFSIZK; i++) {
		/* Allocate, write to force a page, and discard. */
		buf = malloc(BUFSIZK * 1024);
		if (buf == NULL)
			break;
		buf[0] = 0;
		buf = NULL;
	}

	/* Print in kB. */
	printf("%d\n", i * BUFSIZK);

	return 0;
}
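An obvious variant (again just a sketch, not the program used for the numbers above; the 1 MB value is illustrative): impose the data limit from inside the process with setrlimit(2), taking the shell's ulimit builtin out of the picture entirely.

/*
 * test-data-setrlimit.c: sketch variant of test-data.c.  Set
 * RLIMIT_DATA to 1 MB directly via setrlimit(2), then run the same
 * allocation loop.
 */
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

#define BUFSIZK 4

int
main(void)
{
	struct rlimit rl;
	int i;
	int *buf;

	/* Impose a 1 MB data limit from inside the process. */
	rl.rlim_cur = 1024 * 1024;
	rl.rlim_max = 1024 * 1024;
	if (setrlimit(RLIMIT_DATA, &rl) == -1) {
		perror("setrlimit");
		return 1;
	}

	/* Same loop as test-data.c: 4 kB chunks, capped at 4 GB total. */
	for (i = 0; i < (4 * 1024 * 1024) / BUFSIZK; i++) {
		buf = malloc(BUFSIZK * 1024);
		if (buf == NULL)
			break;
		buf[0] = 0;
		buf = NULL;
	}

	/* Print in kB. */
	printf("%d\n", i * BUFSIZK);
	return 0;
}

If this still prints 4194304, the shell is exonerated and the question is purely about how the kernel and malloc account allocations against RLIMIT_DATA.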