On 09/20/2015 04:19 AM, Pádraig Brady wrote:
> On 18/09/15 12:47, Pádraig Brady wrote:
>> Another gotcha with ulimit is that setting it too low
>> can disable any locale specific functionality,
>> because setlocale() will fail below 120M in testing here,
>> in which case we proceed in the "C" locale.
>>
>> For example, testing the recent fix for the sort -M mem leak,
>> I was surprised that I couldn't trigger with:
>>
>>   yes | (ulimit -v15000 && strace -e brk sort -c -M >/dev/null)
>>
>> until I realized I needed to _increase_ the limit
>>
>>   yes | (ulimit -v150000 && strace -e brk sort -c -M >/dev/null)
>>

wow, that is excessive.

>> p.s. using valgrind for mem leak checking is less general
>> because it would depend on "lint" being defined to avoid all
>> "definitely lost" warnings.
> 
> BTW the 100MB virtual mem size increase from setlocale()
> is caused by mmaping the pre-processed locale archive
> at /usr/lib/locale/locale-archive.  That's CoW'd and so
> doesn't use any extra RAM per process, but still
> impacts ulimit -v.
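
(One way to see whether that mapping is what is pushing up the virtual
size is to look at the process's /proc/<pid>/maps; a minimal check,
assuming a glibc system where the archive lives at the usual path:

  # may print nothing if the shell runs in the C/POSIX locale,
  # or if the distro ships split locale files instead of an archive
  grep locale-archive /proc/$$/maps
)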

I cannot reproduce this effect here.
Given the complexity of the side effects of 'ulimit -v'-based tests
in this case, I'm almost inclined to go back to a valgrind'ed
test which only runs when sort was compiled with -Dlint.
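
Such a check might look roughly like this (only a sketch; with
--leak-check enabled, valgrind counts "definite"/"possible" leaks as
errors, so --error-exitcode makes a leak fail the pipeline; the input
size is an arbitrary guess):

  # only meaningful when sort is built with -Dlint, so that reachable
  # allocations are freed at exit and only real leaks remain
  yes | head -n 100000 |
    valgrind --leak-check=full --error-exitcode=1 sort -c -M >/dev/null || fail=1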

Would an adaptive 'ulimit -v'-based approach work, i.e. first
determining the base memory needed by e.g. "echo hello | sort -c -M",
and then using that limit (plus a little more) for a run with much
more input?  Something along these lines, perhaps:
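
(Just a rough sketch; the step size, the margin, and the input size
are guesses, not measured values:

  # probe for the smallest 'ulimit -v' (in KiB) at which the trivial
  # case still works, i.e. including whatever the locale setup needs
  base=5000
  while ! (ulimit -v $base && echo hello | sort -c -M) 2>/dev/null; do
    base=$((base + 5000))
  done

  # then run with much more input under that limit plus a small margin;
  # a per-line leak would exceed the margin and make sort fail
  yes | head -n 500000 |
    (ulimit -v $((base + 1000)) && sort -c -M) || fail=1
)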

Thanks & have a nice day,
Berny


