OK, here is my bad code:

> cat rlimit.c
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit limit;
    getrlimit(RLIMIT_NOFILE, &limit);
    /* rlim_t is wider than int on most platforms, so cast before printing */
    printf("%llu\n", (unsigned long long)limit.rlim_max);
}

When I just run it I get 8192:

> ./rlimit.o
8192

But I get a smaller limit (1024) under Valgrind:

> valgrind ./rlimit.o
==1070== Memcheck, a memory error detector
==1070== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==1070== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==1070== Command: ./rlimit.o
==1070==
1024
==1070==
==1070== HEAP SUMMARY:
==1070==     in use at exit: 0 bytes in 0 blocks
==1070==   total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==1070==
==1070== All heap blocks were freed -- no leaks are possible
==1070==
==1070== For counts of detected and suppressed errors, rerun with: -v
==1070== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 4 from 4)

So I changed my script to set a lower value (100):

> cat foo
#!/bin/sh
ulimit -n 100

It still doesn't work:

> valgrind ./foo
==1071== Memcheck, a memory error detector
==1071== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==1071== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==1071== Command: ./foo
==1071==
./foo: line 2: ulimit: open files: cannot modify limit: Operation not permitted

Assuming I am doing something wrong and this small example should work, is there a way to raise the soft/hard limits that Valgrind uses? Also, if the problem was that I was exceeding the hard limit, shouldn't the error be "Limit exceeded" rather than "Operation not permitted"?

Costas

-----Original Message-----
From: Tom Hughes [mailto:t...@compton.nu]
Sent: Monday, October 06, 2014 6:37 PM
To: Skarakis, Konstantinos; valgrind-users@lists.sourceforge.net
Subject: Re: ulimit under valgrind

On 06/10/14 16:22, Tom Hughes wrote:
> On 06/10/14 16:08, Skarakis, Konstantinos wrote:
>
>> I am having trouble running programs that need to use "ulimit" under
>> valgrind. For instance, here is a simple script that changes the limit
>> of open files to 100000.
>> All commands below are executed as root:
>>
>>> cat foo
>> #!/bin/sh
>> ulimit -n 100000
>>
>> I can run it fine on its own.
>>
>>> ./foo
>>> echo $?
>> 0
>>
>> But under valgrind I get blocked with "Operation not permitted":
>
> Because valgrind needs to reserve a few file descriptors for its own
> use, it effectively reduces the hard limit slightly, so you won't be
> able to raise your own soft limit above that reduced hard limit.

Actually it's worse than that: we allocate the 10 reserved descriptors
immediately above the soft limit (assuming there is space) and
effectively convert the initial soft limit to a hard limit.

Tom

--
Tom Hughes (t...@compton.nu)
http://compton.nu/

_______________________________________________
Valgrind-users mailing list
Valgrind-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/valgrind-users