I hope this issue has not been discussed before. My search on this topic did not turn up anything pertinent.
I have been compiling a computationally bound C++ program with MS Visual C++ and various GNU g++ versions for some time. Normally, g++ and MSVC produce similar run times with optimized code. However, when moving from g++ 3.2 to 3.4 (and, I think, this also happened with 3.3 when I tried it some time ago), the run times increase by about 2.5x. With 3.2 my regressions run in 2 hours (3.0 GHz Pentium D, using both cores), but take over 5 hours with 3.4. I cannot find anything in particular that is causing the slowdown. Both builds were compiled with identical flags.

An example compile command showing all of the flags:

g++ -I /sw/dev/tcl/include -O3 -pedantic -Wpointer-arith -Wsign-compare \
    -Wredundant-decls -Wwrite-strings -Wconversion -Wall -W -Wshadow \
    -Wno-unused -Wundef -Wchar-subscripts -Wno-long-long -Wformat \
    -ffloat-store -DWIN32 -D__BORLANDC__=0 -DNEW_HEADER -DTURQUOISE \
    -DARCH="\"x86-win\"" -c ../../../dev/src/build.cpp

The link command is:

g++ dp.a -lm -ltcl84 -ltk84 -o dp

dp.a contains all of the compiled objects. The code is C++ but does not use any standard library objects.

Here are the detailed g++ versions for the test I performed. I used Windows XP/Intel as the test environment.

g++ (GCC) 3.2 20020927 (prerelease)
g++ (GCC) 3.4.4 (cygming special) (gdc 0.12, using dmd 0.125)

Thanks for any help and suggestions,

Peter
parmailbox-news1 AT yahoo DOT com
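P.S. In case it helps to reproduce the effect without my full source tree, below is the kind of minimal compute-bound timing kernel I could build with the same flag set under both compilers to see whether the slowdown shows up in plain code generation or only in the full application. The function name, constants, and iteration count are placeholders, not taken from my real code.

// bench.cpp - standalone timing sketch (placeholder code, not from the
// real application).  Build the same way under each compiler, e.g.:
//   g++ -O3 -ffloat-store -o bench bench.cpp
// then compare the reported times between g++ 3.2 and g++ 3.4.
#include <cstdio>
#include <ctime>

// A tight floating-point loop, similar in spirit to the hot paths in the
// application: repeated double-precision multiply/accumulate work.
static double fp_kernel(int iterations)
{
    double sum = 0.0;
    double x = 1.000001;
    for (int i = 0; i < iterations; ++i) {
        sum += x * x;        // simple FP work in the inner loop
        x = x * 1.0000001;   // keep the value changing so nothing folds away
    }
    return sum;
}

int main()
{
    const int iterations = 100000000;   // placeholder count, adjust to taste

    std::clock_t start = std::clock();
    double result = fp_kernel(iterations);
    std::clock_t stop = std::clock();

    double seconds = double(stop - start) / CLOCKS_PER_SEC;
    std::printf("result = %f, time = %.2f s\n", result, seconds);
    return 0;
}

If a small kernel like this shows the same 2.5x difference, that would at least point at code generation rather than anything in the Tcl/Tk linkage; if it does not, the problem is presumably elsewhere in my build.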
