So, I rebuilt glibc 2.23 from the 16.04 sources and modified the
values written to the adapt_count parameter in the lock elision code.
It's a short, and the original code may store the values 0, 1, 2, or 3.
We were seeing either 1 (canary hit in constructor) or 0 (canary hit in
destructor). I changed it to
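For context, here is a minimal sketch of how a short adapt_count field typically gates lock elision: a nonzero count means "skip transactional execution for the next N acquisitions and take the real lock instead", and aborts store a small back-off value into it. The names and tuning constants below are illustrative, not the actual glibc 2.23 source.

```cpp
#include <cstdint>

struct ElidedLock {
    int lock = 0;             // the underlying lock word
    int16_t adapt_count = 0;  // the short field discussed above
};

// Assumed tuning values, mirroring the spirit of glibc's elision-conf
// defaults (hypothetical names).
constexpr int16_t kSkipLockBusy = 3;           // lock was busy inside the txn
constexpr int16_t kSkipLockInternalAbort = 3;  // txn aborted persistently

// Returns true if the caller should attempt a hardware transaction.
bool try_elide(ElidedLock& l) {
    if (l.adapt_count > 0) {
        --l.adapt_count;  // back off: take the normal lock this time
        return false;
    }
    return true;
}

// Called when a transaction aborts; records how long to back off.
void on_abort(ElidedLock& l, bool lock_was_busy) {
    l.adapt_count = lock_was_busy ? kSkipLockBusy : kSkipLockInternalAbort;
}
```

Because adapt_count is written with plain (non-transactional) stores, the values seen in it are a useful clue about which code path last touched the lock.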
An update on my experiments:
* 500 runs, no failures, with TLE disabled
* 500 runs, no failures, with TLE enabled but an mprotect() syscall in the
Canary constructor/destructor
* 500 runs, 11 failures, with TLE enabled (about a 2% failure rate)
* Tried switching SMT off and, interestingly, got 200 runs with no failures
with TLE enabled
Question: Is there any magic I can do to this test case:
python buildscripts/resmoke.py --suites=concurrency_sharded
--storageEngine=wiredTiger --excludeWithAnyTags=requires_mmapv1
--dbpathPrefix=... --repeat=500 --continueOnFailure
that would allow me to run multiple copies on the same machine?
This is the other thing I am trying. I've modified the Canary object to
use a 128k stack zone and then use mprotect() to mark the aligned 64k page
in the middle of it read-only. When the destructor is called, it
changes the page back to read-write. This should cause any write to this
region to get a SIGSEGV while the Canary is live.
One other thing, if you use the mprotect thing, it may be necessary to
bump up the value of /proc/sys/vm/max_map_count, depending on how many
of these Canary objects get constructed.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
Andrew,
Yes, that is working nicely with separate DB dirs and basePort I'm running
multiple copies on one machine. Thanks!
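For reference, with separate --dbpathPrefix directories and disjoint --basePort ranges, two copies can be run side by side along these lines (the paths and port numbers below are illustrative):

```shell
# Copy 1: its own dbpath and port block
python buildscripts/resmoke.py --suites=concurrency_sharded \
    --storageEngine=wiredTiger --excludeWithAnyTags=requires_mmapv1 \
    --dbpathPrefix=/tmp/resmoke1 --basePort=20000 \
    --repeat=500 --continueOnFailure &

# Copy 2: separate dbpath, disjoint port block
python buildscripts/resmoke.py --suites=concurrency_sharded \
    --storageEngine=wiredTiger --excludeWithAnyTags=requires_mmapv1 \
    --dbpathPrefix=/tmp/resmoke2 --basePort=21000 \
    --repeat=500 --continueOnFailure &
wait
```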
Found it, looks like the --basePort option to resmoke is what I want.
https://bugs.launchpad.net/bugs/1640518
Title:
MongoDB Memory corruption