ldd ./DelphinSolver gives:
linux-vdso.so.1 => (0x00007ffdc833d000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fbe9eee9000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fbe9ebe0000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fbe9e9c9000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbe9e600000)
/lib64/ld-linux-x86-64.so.2 (0x0000563a11082000)
(where DelphinSolver is my numerical solver engine)
So the bug is probably related to one of these packages:
- libstdc++6:amd64
- libc6:amd64
- libgcc1:amd64
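
To be sure which libc the binary actually picks up on each machine, a tiny check program like the following could be compiled once and run on both systems. This is just a sketch, not part of DelphinSolver; gnu_get_libc_version() is a glibc-specific extension:

#include <gnu/libc-version.h>
#include <cstdio>

int main()
{
    /* Print the glibc version the process actually runs against
       (14.04 should report 2.19, 16.04 should report 2.23). */
    std::printf("glibc: %s\n", gnu_get_libc_version());
    return 0;
}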
** Description changed:
I noticed that a numerical solver I develop runs much slower on 16.04.1
than on 14.04. See for example this output:
The counters (top part of each result section) show that the solver performs exactly the same work on both systems. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds, with the overall time in the last row (WallClockTime). The first column shows the results on Ubuntu 14.04 ("Reference"), the second column the times on 16.04.1 ("New").
../../data/tests/CCMTest/Kirchhoff.d6p
Reference (14.04)   New (16.04.1)
IntegratorErrorTestFails 1026 == 1026
IntegratorFunctionEvals 32474 == 32474
IntegratorLESSetup 3114 == 3114
IntegratorLESSolve 32473 == 32473
IntegratorSteps 25809 == 25809
LESJacEvals 463 == 463
LESRHSEvals 3241 == 3241
LESSetups 3114 == 3114
--
FrameworkTimeWriteOutputs 0.00 ~~ 0.00
IntegratorTimeFunctionEvals 4.96 <> 9.46
IntegratorTimeLESSetup 0.38 ~~ 0.58
IntegratorTimeLESSolve 0.36 ~~ 0.35
LESTimeJacEvals 0.08 ~~ 0.08
LESTimeRHSEvals 0.27 ~~ 0.46
WallClockTime 6.13 <> 10.79
MoistField.d6o
RHField.d6o
../../data/tests/EN15026/Kirchhoff.d6p
Reference (14.04)   New (16.04.1)
IntegratorErrorTestFails 2 == 2
IntegratorFunctionEvals 17685 == 17685
IntegratorLESSetup 903 == 903
IntegratorLESSolve 17684 == 17684
IntegratorSteps 17635 == 17635
LESJacEvals 295 == 295
LESRHSEvals 2065 == 2065
LESSetups 903 == 903
--
FrameworkTimeWriteOutputs 0.03 ~~ 0.03
IntegratorTimeFunctionEvals 31.04 <> 58.89
IntegratorTimeLESSetup 2.47 ~~ 3.76
IntegratorTimeLESSolve 3.05 ~~ 2.98
LESTimeJacEvals 0.28 ~~ 0.28
LESTimeRHSEvals 2.02 ~~ 3.30
WallClockTime 40.39 <> 69.39
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which does by far the most memory access and makes heavy use of the pow(), sqrt() and exp() functions.
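
To narrow down whether the libm routines alone already show the slowdown, independent of the solver's memory access pattern, a small standalone microbenchmark along the following lines could be compiled once on 14.04 (like the solver binary) and run unmodified on both systems. The iteration count and arguments are arbitrary placeholders, not values taken from the solver:

#include <chrono>
#include <cmath>
#include <cstdio>

int main()
{
    const long n = 20 * 1000 * 1000L;       // arbitrary iteration count
    volatile double sink = 0.0;             // keeps the compiler from removing the calls
    const auto t0 = std::chrono::steady_clock::now();
    for (long i = 1; i <= n; ++i) {
        const double x = 0.5 + static_cast<double>(i) / n;   // arbitrary arguments in (0.5, 1.5]
        sink = sink + std::pow(x, 2.5) + std::sqrt(x) + std::exp(-x);
    }
    const auto t1 = std::chrono::steady_clock::now();
    std::printf("pow/sqrt/exp loop: %.3f s (checksum %g)\n",
                std::chrono::duration<double>(t1 - t0).count(),
                static_cast<double>(sink));
    return 0;
}

If the 14.04/16.04 gap shows up here as well, the regression would point at libm/libc rather than at the solver code or its memory access.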
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (both after an upgrade and, on a second machine, after a fresh install).
When the code is recompiled with the newer GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as those of the GCC 4.8.4 binary on Ubuntu 16.04, i.e. still slow. Therefore I do not think this is a GCC bug.
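
As a further sanity check that the unmodified binary resolves to the distribution's libraries at run time on both machines (and not, for example, to a locally built copy via LD_LIBRARY_PATH), the shared objects actually mapped into a process can be listed from inside it. A sketch using the glibc/Linux dl_iterate_phdr() API, again independent of the solver:

#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1
#endif
#include <link.h>
#include <cstdio>

/* Called once per loaded shared object; prints its path. */
static int print_object(struct dl_phdr_info *info, size_t, void *)
{
    if (info->dlpi_name != nullptr && info->dlpi_name[0] != '\0')
        std::printf("%s\n", info->dlpi_name);
    return 0;   /* 0 = continue iterating */
}

int main()
{
    dl_iterate_phdr(print_object, nullptr);
    return 0;
}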
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop?
--
https://bugs.launchpad.net/bugs/1613996
Title:
30% slowdown in numerical solver execution on 16.04.1 vs. 14.04 with the same solver binary