Re: [Valgrind-users] valgrind always fails with out of memory

2020-09-12 Thread Philippe Waroquiers
On Tue, 2020-09-08 at 14:09 +0200, Mario Emmenlauer wrote:
> On 08.09.20 12:25, Mario Emmenlauer wrote:
> > On 08.09.20 12:04, Mario Emmenlauer wrote:
> > > The error I get most frequently is (full output attached in log.txt)
> > > ==32== Valgrind's memory management: out of memory:
> > > ==32==  newSuperblock's request for 6864695621860790272 bytes failed.
> > > ==32==  114,106,368 bytes have already been mmap-ed ANONYMOUS.
> > 
> > Argh! After sending the email, I went through the stack trace for
> > the hundredth time, and spotted the use of "zlib". And indeed, when
> > replacing my own zlib 1.2.11 with the system zlib 1.2.11, valgrind
> > works as expected!
> > 
> > Does that make sense? Is zlib used by valgrind itself? And why could
> > my debug build differ (so much) from the system zlib that it breaks
> > valgrind? I double-checked and it's the identical source code from
> > Ubuntu, just missing two or three patches.
> 
> So it seems I can (partially) answer my own question: when valgrind
> is used on an executable that links a zlib built with -ggdb3, it
> does not work (it fails with the aforementioned error). Keeping all
> other debug settings and dropping only -ggdb3 still works fine.
> 
> I have no clue as to _why_ this may happen, but I hope it can be
> helpful to other people running into the same issue.
zlib is not used by the valgrind tools. In fact, the valgrind tools
do not use any external library (not even libc).

The newSuperblock trace above shows that a *huge* block is requested.
As this bug only happens when you use -ggdb3, it is likely a problem
in valgrind's debuginfo reader: some debug info generated by -ggdb3
is very probably not handled properly.
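For the curious, the difference is visible in the binaries themselves: compared with plain -g, -ggdb3 additionally records preprocessor macro definitions (a .debug_macro section with modern GCC), which is extra DWARF data a debuginfo reader must parse. A minimal sketch, with illustrative file names not taken from this thread:

```shell
# Sketch: show the extra macro debug info that -ggdb3 emits compared
# with plain -g. File names here are illustrative placeholders.
echo 'int main(void){return 0;}' > /tmp/dbg_demo.c
gcc -g     -o /tmp/dbg_demo_g  /tmp/dbg_demo.c
gcc -ggdb3 -o /tmp/dbg_demo_g3 /tmp/dbg_demo.c
# Only the -ggdb3 binary carries a .debug_macro section:
readelf -S /tmp/dbg_demo_g3 | grep debug_macro
readelf -S /tmp/dbg_demo_g  | grep debug_macro || echo "no debug_macro with plain -g"
```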

I recompiled libz with -ggdb3 myself, but saw no problem when running
this lib under valgrind.

We might get a clearer idea of what happens on your side
by adding some tracing.

The best next step is to file a bug on bugzilla and attach the output
of running valgrind with -d -d -d -v -v -v.

That might give some information about what is wrong,
and possibly some more detailed tracing can then be activated.
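For reference, the suggested invocation would look like the line echoed below ("./your_test" and "vg-debug.log" are placeholders); note that valgrind's -d/-v diagnostics go to stderr, so that is the stream to redirect into the file you attach:

```shell
# The suggested bug-report invocation, echoed here so the sketch runs
# anywhere; "./your_test" and "vg-debug.log" are placeholders.
# Drop the echo and quotes to run it for real.
echo 'valgrind -d -d -d -v -v -v ./your_test 2> vg-debug.log'
```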

Thanks
Philippe




___
Valgrind-users mailing list
Valgrind-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/valgrind-users


Re: [Valgrind-users] valgrind always fails with out of memory

2020-09-08 Thread Mario Emmenlauer
On 08.09.20 12:25, Mario Emmenlauer wrote:
> 
> On 08.09.20 12:04, Mario Emmenlauer wrote:
>> The error I get most frequently is (full output attached in log.txt)
>> ==32== Valgrind's memory management: out of memory:
>> ==32==  newSuperblock's request for 6864695621860790272 bytes failed.
>> ==32==  114,106,368 bytes have already been mmap-ed ANONYMOUS.
> 
> Argh! After sending the email, I went through the stack trace for
> the hundredth time, and spotted the use of "zlib". And indeed, when
> replacing my own zlib 1.2.11 with the system zlib 1.2.11, valgrind
> works as expected!
> 
> Does that make sense? Is zlib used by valgrind itself? And why could
> my debug build differ (so much) from the system zlib that it breaks
> valgrind? I double-checked and it's the identical source code from
> Ubuntu, just missing two or three patches.

So it seems I can (partially) answer my own question: when valgrind
is used on an executable that links a zlib built with -ggdb3, it
does not work (it fails with the aforementioned error). Keeping all
other debug settings and dropping only -ggdb3 still works fine.

I have no clue as to _why_ this may happen, but I hope it can be
helpful to other people running into the same issue.

All the best,

Mario Emmenlauer





[Valgrind-users] valgrind always fails with out of memory

2020-09-08 Thread Mario Emmenlauer

Dear All,

Many years ago I used valgrind frequently and successfully,
admittedly without ever giving it much thought! Thanks for the awesome
tool.

Now I'm setting up a larger CI system and want automatic memcheck for
our tests. However, in the whole past year, I could not get a single
successful run. So I must be doing something very wrong. Help would be
greatly appreciated :-(


The error I get most frequently is (full output attached in log.txt)
==32== Valgrind's memory management: out of memory:
==32==  newSuperblock's request for 6864695621860790272 bytes failed.
==32==  114,106,368 bytes have already been mmap-ed ANONYMOUS.


Here is what I tried so far:
 - Versions valgrind-3.13.0 from Ubuntu 18.04 and valgrind-3.16.1
   compiled from source
 - Executed valgrind in a docker container running Ubuntu 18.04 x86_64
   and Ubuntu 20.04 x86_64
 - Checked `ulimit -a` in Docker, there are no tight limits
 - Tried valgrind with some 50++ different executables, all lead to
   the same error message
 - Tried valgrind outside Docker, leads to the same error message
 - Checked `ulimit -a` outside Docker, there are no tight limits
 - Verified the tests run successfully when _not_ using valgrind


I have also tried valgrind on other executables than our debug builds,
and it seems to work there without problems. So maybe the errors are
related to how we create debug builds?

We make pretty standard debug builds (I assume), with the flags
-ggdb3 -fno-omit-frame-pointer -O1 -m64 -march=nehalem -mtune=haswell.
Are any of these suspicious?

The host machines I have tried are relatively modern desktop computers
with 64GB of RAM, and modern Skylake or Ryzen processors. The OS is
typically Ubuntu 18.04 or 20.04. I have not set up any tight permission
restrictions like selinux (unless it would be the default for Ubuntu).

Any ideas for what I can try are more than appreciated!

All the best,

Mario Emmenlauer
==33== Memcheck, a memory error detector
==33== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==33== Using Valgrind-3.16.1 and LibVEX; rerun with -h for copyright info
==33== Command: /data/bdaci01/BioDataAnalysis/stable-tmp-Ubuntu-Skylake-20.04-x86_64-gcc9/Debug/BDAImageAnalysis/VigraWrapperTest
==33== 
--33:0: aspacem <<< SHOW_SEGMENTS: out_of_memory (105 segments)
--33:0: aspacem 15 segment names in 15 slots
--33:0: aspacem freelist is empty
--33:0: aspacem (0,4,9) /data/bdaci01/BioDataAnalysis/stable-artifacts-Ubuntu-Skylake-20.04-x86_64-gcc9/Tools/lib/valgrind/memcheck-amd64-linux
--33:0: aspacem (1,128,7) /data/bdaci01/BioDataAnalysis/stable-tmp-Ubuntu-Skylake-20.04-x86_64-gcc9/Debug/BDAImageAnalysis/VigraWrapperTest
--33:0: aspacem (2,246,7) /usr/lib/x86_64-linux-gnu/ld-2.31.so
--33:0: aspacem (3,287,1) /tmp/vgdb-pipe-shared-mem-vgdb-33-by-???-on-172436f18aee
--33:0: aspacem (4,348,7) /data/bdaci01/BioDataAnalysis/stable-artifacts-Ubuntu-Skylake-20.04-x86_64-gcc9/Tools/lib/valgrind/vgpreload_core-amd64-linux.so
--33:0: aspacem (5,481,7) /data/bdaci01/BioDataAnalysis/stable-artifacts-Ubuntu-Skylake-20.04-x86_64-gcc9/Tools/lib/valgrind/vgpreload_memcheck-amd64-linux.so
--33:0: aspacem (6,618,1) /etc/ld.so.cache
--33:0: aspacem (7,639,7) /usr/lib/x86_64-linux-gnu/libm-2.31.so
--33:0: aspacem (8,682,7) /data/bdaci01/BioDataAnalysis/stable-artifacts-Ubuntu-Skylake-20.04-x86_64-gcc9/Debug/lib/libQt5Core.so.5.15.0
--33:0: aspacem (9,797,7) /usr/lib/x86_64-linux-gnu/libdl-2.31.so
--33:0: aspacem (10,841,8) /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28
--33:0: aspacem (11,891,7) /usr/lib/x86_64-linux-gnu/libgcc_s.so.1
--33:0: aspacem (12,935,7) /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
--33:0: aspacem (13,984,8) /usr/lib/x86_64-linux-gnu/libc-2.31.so
--33:0: aspacem (14,1027,7) /data/bdaci01/BioDataAnalysis/stable-artifacts-Ubuntu-Skylake-20.04-x86_64-gcc9/Debug/lib/libz.so.1.2.11
--33:0: aspacem   0: RSVN 00-107fff 1081344 - SmFixed
--33:0: aspacem   1: file 108000-116fff 61440 r d=0x811 i=143002127 o=0 (1,128)
--33:0: aspacem   2: file 117000-1d0fff 761856 r-x-- d=0x811 i=143002127 o=61440 (1,128)
--33:0: aspacem   3: file 1d1000-1fbfff 176128 r d=0x811 i=143002127 o=823296 (1,128)
--33:0: aspacem   4: file 1fc000-200fff 20480 rw--- d=0x811 i=143002127 o=995328 (1,128)
--33:0: aspacem   5: anon 201000-201fff 4096 rw---
--33:0: aspacem   6: RSVN 202000-0003ff 61m - SmFixed
--33:0: aspacem   7: file 000400-0004000fff 4096 r d=0x05e i=95955233 o=0 (2,246)
--33:0: aspacem   8: file 0004001000-0004023fff 143360 r-xT- d=0x05e i=95955233 o=4096 (2,246)
--33:0: aspacem   9: file 0004024000-000402bfff 32768 r d=0x05e i=95955233 o=147456 (2,246)
--33:0: aspacem  10: 000402c000-000402cfff 4096
--33:0: aspacem  11: file 000402d000-000402efff 8192 rw--- d=0x05e i=95955233 o=180224 (2,246)
--33:0: aspacem  12: anon 

Re: [Valgrind-users] valgrind always fails with out of memory

2020-09-08 Thread Mario Emmenlauer

On 08.09.20 12:04, Mario Emmenlauer wrote:
> The error I get most frequently is (full output attached in log.txt)
> ==32== Valgrind's memory management: out of memory:
> ==32==  newSuperblock's request for 6864695621860790272 bytes failed.
> ==32==  114,106,368 bytes have already been mmap-ed ANONYMOUS.

Argh! After sending the email, I went through the stack trace for
the hundredth time, and spotted the use of "zlib". And indeed, when
replacing my own zlib 1.2.11 with the system zlib 1.2.11, valgrind
works as expected!

Does that make sense? Is zlib used by valgrind itself? And why could
my debug build differ (so much) from the system zlib that it breaks
valgrind? I double-checked and it's the identical source code from
Ubuntu, just missing two or three patches.

All the best,

Mario Emmenlauer


--
BioDataAnalysis GmbH, Mario Emmenlauer  Tel. Buero: +49-89-74677203
Balanstr. 43   mailto: memmenlauer * biodataanalysis.de
D-81669 München  http://www.biodataanalysis.de/

